## Count the number of rows {#count-row-numbers}
Open a new tab in the SQL Console of ClickHouse Cloud (or a new `clickhouse-client` window) and watch the count increase. It will take a while to insert 4.56B rows, depending on your server resources. (Without any tweaking of settings, it takes about 4.5 hours.)
```sql
SELECT formatReadableQuantity(count())
FROM youtube
```

```response
┌─formatReadableQuantity(count())─┐
│ 4.56 billion                    │
└─────────────────────────────────┘
```
## Explore the data {#explore-the-data}
Once the data is inserted, go ahead and count the number of dislikes of your favorite videos or channels. Let's see how many videos were uploaded by ClickHouse:
```sql
SELECT count()
FROM youtube
WHERE uploader = 'ClickHouse';
```

```response
┌─count()─┐
│      84 │
└─────────┘

1 row in set. Elapsed: 0.570 sec. Processed 237.57 thousand rows, 5.77 MB (416.54 thousand rows/s., 10.12 MB/s.)
```
:::note
The query above runs so quickly because we chose `uploader` as the first column of the primary key - so it only had to process 237k rows.
:::
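If you want to see the pruning yourself, `EXPLAIN indexes = 1` reports how the primary index narrows the scan. This is a sketch that assumes your `youtube` table was created with `uploader` leading the `ORDER BY` key, as in this guide:

```sql
-- Show how the primary index narrows the scan for the count query above.
-- Assumes the table's ORDER BY starts with uploader.
EXPLAIN indexes = 1
SELECT count()
FROM youtube
WHERE uploader = 'ClickHouse';
```

The output lists each index stage with the number of granules selected out of the total, so you can confirm only a small fraction of the table is read.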
Let's look at the likes and dislikes of ClickHouse videos:
```sql
SELECT
    title,
    like_count,
    dislike_count
FROM youtube
WHERE uploader = 'ClickHouse'
ORDER BY dislike_count DESC;
```
The response looks like:
```response
┌─title─────────────────────────────┬─like_count─┬─dislike_count─┐
│ ClickHouse v21.11 Release Webinar │         52 │             3 │
│ ClickHouse Introduction           │         97 │             3 │
│ Casa Modelo Algarve               │        180 │             3 │
│ Профайлер запросов: трудный путь  │         33 │             3 │
│ ClickHouse в Курсометре           │          4 │             2 │
│ 10 Good Reasons to Use ClickHouse │         27 │             2 │
...

84 rows in set. Elapsed: 0.013 sec. Processed 155.65 thousand rows, 16.94 MB (11.96 million rows/s., 1.30 GB/s.)
```
Here is a search for videos with `ClickHouse` in the `title` or `description` fields:
```sql
SELECT
    view_count,
    like_count,
    dislike_count,
    concat('https://youtu.be/', id) AS url,
    title
FROM youtube
WHERE (title ILIKE '%ClickHouse%') OR (description ILIKE '%ClickHouse%')
ORDER BY
    like_count DESC,
    view_count DESC;
```
This query has to process every row, and also parse through two columns of strings. Even then, we get decent performance at 4.15M rows/second:
```response
1174 rows in set. Elapsed: 1099.368 sec. Processed 4.56 billion rows, 1.98 TB (4.15 million rows/s., 1.80 GB/s.)
```
The results look like:
```response
┌─view_count─┬─like_count─┬─dislike_count─┬─url──────────────────────────┬─title──────────────────────────────┐
│       1919 │         63 │             1 │ https://youtu.be/b9MeoOtAivQ │ ClickHouse v21.10 Release Webinar  │
│       8710 │         62 │             4 │ https://youtu.be/PeV1mC2z--M │ What is JDBC DriverManager? | JDBC │
│       3534 │         62 │             1 │ https://youtu.be/8nWRhK9gw10 │ CLICKHOUSE - Arquitetura Modular   │
```
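If you run this kind of search often, a token-based skip index can avoid most of the scan. The index name and parameters below are illustrative, not part of the original schema, and note that `hasToken` matches whole, case-sensitive tokens, so it is not an exact replacement for `ILIKE '%ClickHouse%'`:

```sql
-- Illustrative sketch: a token bloom filter over title lets token-equality
-- predicates skip granules instead of scanning every row.
-- tokenbf_v1(filter_size_in_bytes, number_of_hash_functions, seed)
ALTER TABLE youtube ADD INDEX title_tokens_idx title TYPE tokenbf_v1(32768, 3, 0) GRANULARITY 4;
ALTER TABLE youtube MATERIALIZE INDEX title_tokens_idx;

SELECT count()
FROM youtube
WHERE hasToken(title, 'ClickHouse');
```

`MATERIALIZE INDEX` backfills the index for existing parts; without it, only newly inserted data benefits.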
## Questions {#questions}
### If someone disables comments does it lower the chance someone will actually click like or dislike? {#if-someone-disables-comments-does-it-lower-the-chance-someone-will-actually-click-like-or-dislike}
When commenting is disabled, are people more likely to like or dislike to express their feelings about a video?
```sql
SELECT
    concat('< ', formatReadableQuantity(view_range)) AS views,
    is_comments_enabled,
    total_clicks / num_views AS prob_like_dislike
FROM
(
    SELECT
        is_comments_enabled,
        power(10, CEILING(log10(view_count + 1))) AS view_range,
        sum(like_count + dislike_count) AS total_clicks,
        sum(view_count) AS num_views
    FROM youtube
    GROUP BY
        view_range,
        is_comments_enabled
) WHERE view_range > 1
ORDER BY
    is_comments_enabled ASC,
    num_views ASC;
```
```response
┌─views─────────────┬─is_comments_enabled─┬────prob_like_dislike─┐
│ < 10.00           │ false               │  0.08224180712685371 │
│ < 100.00          │ false               │  0.06346337759167248 │
│ < 1.00 thousand   │ false               │  0.03201883652987105 │
│ < 10.00 thousand  │ false               │  0.01716073540410903 │
│ < 10.00 billion   │ false               │ 0.004555639481829971 │
│ < 100.00 thousand │ false               │  0.01293351460515323 │
│ < 1.00 billion    │ false               │ 0.004761811192464957 │
│ < 1.00 million    │ false               │ 0.010472604018980551 │
│ < 10.00 million   │ false               │  0.00788902538420125 │
│ < 100.00 million  │ false               │  0.00579152804250582 │
│ < 10.00           │ true                │  0.09819517478134059 │
│ < 100.00          │ true                │  0.07403784478585775 │
│ < 1.00 thousand   │ true                │  0.03846294910067627 │
│ < 10.00 billion   │ true                │ 0.005615217329358215 │
│ < 10.00 thousand  │ true                │  0.02505881391701455 │
│ < 1.00 billion    │ true                │ 0.007434998802482997 │
│ < 100.00 thousand │ true                │ 0.022694648130822004 │
│ < 100.00 million  │ true                │ 0.011761563746575625 │
│ < 1.00 million    │ true                │ 0.020776022304589435 │
│ < 10.00 million   │ true                │ 0.016917095718089584 │
└───────────────────┴─────────────────────┴──────────────────────┘

22 rows in set. Elapsed: 8.460 sec. Processed 4.56 billion rows, 77.48 GB (538.73 million rows/s., 9.16 GB/s.)
```
Enabling comments seems to be correlated with a higher rate of engagement.
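To reduce that to a single comparison, you can aggregate the click-through probability per `is_comments_enabled` value over the whole table (a sketch using the same column definitions as the query above):

```sql
-- Overall probability of a like/dislike click, split by comment setting.
SELECT
    is_comments_enabled,
    sum(like_count + dislike_count) / sum(view_count) AS prob_like_dislike
FROM youtube
GROUP BY is_comments_enabled;
```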
### How does the number of videos change over time - notable events? {#how-does-the-number-of-videos-change-over-time---notable-events}
```sql
SELECT
    toStartOfMonth(toDateTime(upload_date)) AS month,
    uniq(uploader_id) AS uploaders,
    count() AS num_videos,
    sum(view_count) AS view_count
FROM youtube
GROUP BY month
ORDER BY month ASC;
```
```response
┌──────month─┬─uploaders─┬─num_videos─┬───view_count─┐
│ 2005-04-01 │         5 │          6 │    213597737 │
│ 2005-05-01 │         6 │          9 │      2944005 │
│ 2005-06-01 │       165 │        351 │     18624981 │
│ 2005-07-01 │       395 │       1168 │     94164872 │
│ 2005-08-01 │      1171 │       3128 │    124540774 │
│ 2005-09-01 │      2418 │       5206 │    475536249 │
│ 2005-10-01 │      6750 │      13747 │    737593613 │
│ 2005-11-01 │     13706 │      28078 │   1896116976 │
│ 2005-12-01 │     24756 │      49885 │   2478418930 │
│ 2006-01-01 │     49992 │     100447 │   4532656581 │
│ 2006-02-01 │     67882 │     138485 │   5677516317 │
│ 2006-03-01 │    103358 │     212237 │   8430301366 │
│ 2006-04-01 │    114615 │     234174 │   9980760440 │
│ 2006-05-01 │    152682 │     332076 │  14129117212 │
│ 2006-06-01 │    193962 │     429538 │  17014143263 │
│ 2006-07-01 │    234401 │     530311 │  18721143410 │
│ 2006-08-01 │    281280 │     614128 │  20473502342 │
│ 2006-09-01 │    312434 │     679906 │  23158422265 │
│ 2006-10-01 │    404873 │     897590 │  27357846117 │
```
A spike of uploaders around COVID is noticeable.
### More subtitles over time and when {#more-subtitles-over-time-and-when}
With advances in speech recognition, it's easier than ever to create subtitles for videos, and YouTube added auto-captioning in late 2009 - was there a jump then?
```sql
SELECT
    toStartOfMonth(upload_date) AS month,
    countIf(has_subtitles) / count() AS percent_subtitles,
    percent_subtitles - any(percent_subtitles) OVER (
        ORDER BY month ASC ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING
    ) AS previous
FROM youtube
GROUP BY month
ORDER BY month ASC;
```
```response
┌──────month─┬───percent_subtitles─┬────────────────previous─┐
│ 2015-01-01 │  0.2652653881082824 │      0.2652653881082824 │
│ 2015-02-01 │  0.3147556050309162 │    0.049490216922633834 │
│ 2015-03-01 │ 0.32460464492371877 │    0.009849039892802558 │
│ 2015-04-01 │ 0.33471963051468445 │    0.010114985590965686 │
│ 2015-05-01 │  0.3168087575501062 │   -0.017910872964578273 │
│ 2015-06-01 │  0.3162609788438222 │  -0.0005477787062839745 │
│ 2015-07-01 │ 0.31828767677518033 │   0.0020266979313581235 │
│ 2015-08-01 │  0.3045551564286859 │   -0.013732520346494415 │
│ 2015-09-01 │   0.311221133995152 │    0.006665977566466086 │
│ 2015-10-01 │ 0.30574870926812175 │   -0.005472424727030245 │
│ 2015-11-01 │ 0.31125409712077234 │   0.0055053878526505895 │
│ 2015-12-01 │  0.3190967954651779 │    0.007842698344405541 │
│ 2016-01-01 │ 0.32636021432496176 │    0.007263418859783877 │
```
The data results show a spike in 2009. Apparently at that time, YouTube was removing their community captions feature, which allowed you to upload captions for other people's videos.
This prompted a very successful campaign to have creators add captions to their videos for hard of hearing and deaf viewers.
### Top uploaders over time {#top-uploaders-over-time}
```sql
WITH uploaders AS
    (
        SELECT uploader
        FROM youtube
        GROUP BY uploader
        ORDER BY sum(view_count) DESC
        LIMIT 10
    )
SELECT
    month,
    uploader,
    sum(view_count) AS total_views,
    avg(dislike_count / like_count) AS like_to_dislike_ratio
FROM youtube
WHERE uploader IN (uploaders)
GROUP BY
    toStartOfMonth(upload_date) AS month,
    uploader
ORDER BY
    month ASC,
    total_views DESC;
```
```response
┌──────month─┬─uploader───────────────────┬─total_views─┬─like_to_dislike_ratio─┐
│ 1970-01-01 │ T-Series                   │    10957099 │  0.022784656361208206 │
│ 1970-01-01 │ Ryan's World               │           0 │  0.003035559410234172 │
│ 1970-01-01 │ SET India                  │           0 │                   nan │
│ 2006-09-01 │ Cocomelon - Nursery Rhymes │   256406497 │    0.7005566715978622 │
│ 2007-06-01 │ Cocomelon - Nursery Rhymes │    33641320 │    0.7088650914344298 │
│ 2008-02-01 │ WWE                        │    43733469 │   0.07198856488734842 │
│ 2008-03-01 │ WWE                        │    16514541 │    0.1230603715431997 │
│ 2008-04-01 │ WWE                        │     5907295 │    0.2089399470159618 │
│ 2008-05-01 │ WWE                        │     7779627 │   0.09101676560436774 │
│ 2008-06-01 │ WWE                        │     7018780 │    0.0974184753155297 │
│ 2008-07-01 │ WWE                        │     4686447 │    0.1263845422065158 │
│ 2008-08-01 │ WWE                        │     4514312 │   0.08384574274791441 │
│ 2008-09-01 │ WWE                        │     3717092 │   0.07872802579349912 │
```
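The `1970-01-01` rows are an artifact: a missing `upload_date` defaults to the Unix epoch. A hedged variant of the aggregation that simply drops those rows (the uploader names here are taken from the output above, purely for illustration):

```sql
-- Sketch: exclude rows whose upload_date defaulted to the epoch.
SELECT
    toStartOfMonth(upload_date) AS month,
    uploader,
    sum(view_count) AS total_views
FROM youtube
WHERE (uploader IN ('T-Series', 'WWE')) AND (upload_date > '1970-01-01')
GROUP BY
    month,
    uploader
ORDER BY month ASC;
```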
### How does the like ratio change as views go up? {#how-do-like-ratio-changes-as-views-go-up}
```sql
SELECT
    concat('< ', formatReadableQuantity(view_range)) AS view_range,
    is_comments_enabled,
    round(like_ratio, 2) AS like_ratio
FROM
(
    SELECT
        power(10, CEILING(log10(view_count + 1))) AS view_range,
        is_comments_enabled,
        avg(like_count / dislike_count) AS like_ratio
    FROM youtube
    WHERE dislike_count > 0
    GROUP BY
        view_range,
        is_comments_enabled
    HAVING view_range > 1
    ORDER BY
        view_range ASC,
        is_comments_enabled ASC
);
```
```response
┌─view_range────────┬─is_comments_enabled─┬─like_ratio─┐
│ < 10.00           │ false               │       0.66 │
│ < 10.00           │ true                │       0.66 │
│ < 100.00          │ false               │          3 │
│ < 100.00          │ true                │       3.95 │
│ < 1.00 thousand   │ false               │       8.45 │
│ < 1.00 thousand   │ true                │      13.07 │
│ < 10.00 thousand  │ false               │      18.57 │
│ < 10.00 thousand  │ true                │      30.92 │
│ < 100.00 thousand │ false               │      23.55 │
│ < 100.00 thousand │ true                │      42.13 │
│ < 1.00 million    │ false               │      19.23 │
│ < 1.00 million    │ true                │      37.86 │
│ < 10.00 million   │ false               │      12.13 │
│ < 10.00 million   │ true                │      30.72 │
│ < 100.00 million  │ false               │       6.67 │
│ < 100.00 million  │ true                │      23.32 │
│ < 1.00 billion    │ false               │       3.08 │
│ < 1.00 billion    │ true                │      20.69 │
│ < 10.00 billion   │ false               │       1.77 │
│ < 10.00 billion   │ true                │       19.5 │
└───────────────────┴─────────────────────┴────────────┘
```
### How are views distributed? {#how-are-views-distributed}
```sql
SELECT
    labels AS percentile,
    round(quantiles) AS views
FROM
(
    SELECT
        quantiles(0.999, 0.99, 0.95, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1)(view_count) AS quantiles,
        ['99.9th', '99th', '95th', '90th', '80th', '70th', '60th', '50th', '40th', '30th', '20th', '10th'] AS labels
    FROM youtube
)
ARRAY JOIN
    quantiles,
    labels;
```
```response
┌─percentile─┬───views─┐
│ 99.9th     │ 1216624 │
│ 99th       │  143519 │
│ 95th       │   13542 │
│ 90th       │    4054 │
│ 80th       │     950 │
│ 70th       │     363 │
│ 60th       │     177 │
│ 50th       │      97 │
│ 40th       │      57 │
│ 30th       │      32 │
│ 20th       │      16 │
│ 10th       │       6 │
└────────────┴─────────┘
```
---
description: 'Dataset consisting of two tables containing anonymized web analytics data with hits and visits'
sidebar_label: 'Anonymized web analytics'
slug: /getting-started/example-datasets/metrica
keywords: ['web analytics data', 'anonymized data', 'website traffic data', 'example dataset', 'getting started']
title: 'Anonymized web analytics'
doc_type: 'guide'
---
# Anonymized web analytics data
This dataset consists of two tables containing anonymized web analytics data with hits (`hits_v1`) and visits (`visits_v1`).

The tables can be downloaded as compressed `tsv.xz` files. In addition to the sample worked with in this document, an extended (7.5GB) version of the `hits` table containing 100 million rows is available as TSV at https://datasets.clickhouse.com/hits/tsv/hits_100m_obfuscated_v1.tsv.xz.
## Download and ingest the data {#download-and-ingest-the-data}
### Download the hits compressed TSV file {#download-the-hits-compressed-tsv-file}
```bash
curl https://datasets.clickhouse.com/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
```

Validate the checksum:

```bash
md5sum hits_v1.tsv
```

Checksum should be equal to: `f3631b6295bf06989c1437491f7592cb`
### Create the database and table {#create-the-database-and-table}
```bash
clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets"
```
For `hits_v1`:
```bash
clickhouse-client --query "CREATE TABLE datasets.hits_v1 ( WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8) ENGINE = MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192"
```
Or for `hits_100m_obfuscated`:
```bash
clickhouse-client --query="CREATE TABLE default.hits_100m_obfuscated (WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, Refresh UInt8, RefererCategoryID UInt16, RefererRegionID UInt32, URLCategoryID UInt16, URLRegionID UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, OriginalURL String, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), LocalEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, RemoteIP UInt32, WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming UInt32, DNSTiming UInt32, ConnectTiming UInt32, ResponseStartTiming UInt32, ResponseEndTiming UInt32, FetchTiming UInt32, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32) ENGINE = MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192"
```
### Import the hits data {#import-the-hits-data}
```bash
cat hits_v1.tsv | clickhouse-client --query "INSERT INTO datasets.hits_v1 FORMAT TSV" --max_insert_block_size=100000
```
Verify the count of rows:

```bash
clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1"
```

```response
8873898
```
### Download the visits compressed TSV file {#download-the-visits-compressed-tsv-file}
```bash
curl https://datasets.clickhouse.com/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
```

Validate the checksum:

```bash
md5sum visits_v1.tsv
```

Checksum should be equal to: `6dafe1a0f24e59e3fc2d0fed85601de6`
### Create the visits table {#create-the-visits-table}
```bash
clickhouse-client --query "CREATE TABLE datasets.visits_v1 ( CounterID UInt32, StartDate Date, Sign Int8, IsNew UInt8, VisitID UInt64, UserID UInt64, StartTime DateTime, Duration UInt32, UTCStartTime DateTime, PageViews Int32, Hits Int32, IsBounce UInt8, Referer String, StartURL String, RefererDomain String, StartURLDomain String, EndURL String, LinkURL String, IsDownload UInt8, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, PlaceID Int32, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), IsYandex UInt8, GoalReachesDepth Int32, GoalReachesURL Int32, GoalReachesAny Int32, SocialSourceNetworkID UInt8, SocialSourcePage String, MobilePhoneModel String, ClientEventTime DateTime, RegionID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RemoteIP UInt32, RemoteIP6 FixedString(16), IPNetworkID UInt32, SilverlightVersion3 UInt32, CodeVersion UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, UserAgentMajor UInt16, UserAgentMinor UInt16, WindowClientWidth UInt16, WindowClientHeight UInt16, SilverlightVersion2 UInt8, SilverlightVersion4 UInt16, FlashVersion3 UInt16, FlashVersion4 UInt16, ClientTimeZone Int16, OS UInt8, UserAgent UInt8, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, NetMajor UInt8, NetMinor UInt8, MobilePhone UInt8, SilverlightVersion1 UInt8, Age UInt8, Sex UInt8, Income UInt8, JavaEnable UInt8, CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, BrowserLanguage UInt16, BrowserCountry UInt16, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), Params Array(String), Goals Nested(ID UInt32, Serial UInt32, EventTime DateTime, Price Int64, OrderID String, CurrencyID UInt32), WatchIDs Array(UInt64), ParamSumPrice Int64, ParamCurrency FixedString(3), ParamCurrencyID UInt16, ClickLogID UInt64, ClickEventID Int32, ClickGoodEvent Int32, ClickEventTime DateTime, ClickPriorityID Int32, ClickPhraseID Int32, ClickPageID Int32, ClickPlaceID Int32, ClickTypeID Int32, ClickResourceID Int32, ClickCost UInt32, ClickClientIP UInt32, ClickDomainID UInt32, ClickURL String, ClickAttempt UInt8, ClickOrderID UInt32, ClickBannerID UInt32, ClickMarketCategoryID UInt32, ClickMarketPP UInt32, ClickMarketCategoryName String, ClickMarketPPName String, ClickAWAPSCampaignName String, ClickPageName String, ClickTargetType UInt16, ClickTargetPhraseID UInt64, ClickContextType UInt8, ClickSelectType Int8, ClickOptions String, ClickGroupBannerID Int32, OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, FirstVisit DateTime, PredLastVisit Date, LastVisit Date, TotalVisits UInt32, TraficSource Nested(ID Int8, SearchEngineID UInt16, AdvEngineID UInt8, PlaceID UInt16, SocialSourceNetworkID UInt8, Domain String, SearchPhrase String, SocialSourcePage String), Attendance FixedString(16), CLID UInt32, YCLID UInt64, NormalizedRefererHash UInt64, SearchPhraseHash UInt64, RefererDomainHash UInt64, NormalizedStartURLHash UInt64, StartURLDomainHash UInt64, NormalizedEndURLHash UInt64, TopLevelDomain UInt64, URLScheme UInt64, OpenstatServiceNameHash UInt64, OpenstatCampaignIDHash UInt64, OpenstatAdIDHash UInt64, OpenstatSourceIDHash UInt64, UTMSourceHash UInt64, UTMMediumHash UInt64, UTMCampaignHash UInt64, UTMContentHash UInt64, UTMTermHash UInt64, FromHash UInt64, WebVisorEnabled UInt8, WebVisorActivity UInt32, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), Market Nested(Type UInt8, GoalID UInt32, OrderID String, OrderPrice Int64, PP UInt32, DirectPlaceID UInt32, DirectOrderID UInt32, DirectBannerID UInt32, GoodID String, GoodName String, GoodQuantity Int32, GoodPrice Int64), IslandID FixedString(16)) ENGINE = CollapsingMergeTree(Sign) PARTITION BY toYYYYMM(StartDate) ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192"
```
### Import the visits data {#import-the-visits-data}
```bash
cat visits_v1.tsv | clickhouse-client --query "INSERT INTO datasets.visits_v1 FORMAT TSV" --max_insert_block_size=100000
```
Verify the count:

```bash
clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1"
```

```response
1680609
```
## An example JOIN {#an-example-join}
The hits and visits datasets are used in the ClickHouse test routines; this is one of the queries from the test suite. The rest of the tests are referenced in the Next Steps section at the end of this page.
```bash
clickhouse-client --query "SELECT
    EventDate,
    hits,
    visits
FROM
(
    SELECT
        EventDate,
        count() AS hits
    FROM datasets.hits_v1
    GROUP BY EventDate
) ANY LEFT JOIN
(
    SELECT
        StartDate AS EventDate,
        sum(Sign) AS visits
    FROM datasets.visits_v1
    GROUP BY EventDate
) USING EventDate
ORDER BY hits DESC
LIMIT 10
SETTINGS joined_subquery_requires_alias = 0
FORMAT PrettyCompact"
```
```response
┌──EventDate─┬────hits─┬─visits─┐
│ 2014-03-17 │ 1406958 │ 265108 │
│ 2014-03-19 │ 1405797 │ 261624 │
│ 2014-03-18 │ 1383658 │ 258723 │
│ 2014-03-20 │ 1353623 │ 255328 │
│ 2014-03-21 │ 1245779 │ 236232 │
│ 2014-03-23 │ 1046491 │ 202212 │
│ 2014-03-22 │ 1031592 │ 197354 │
└────────────┴─────────┴────────┘
```
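Note the `sum(Sign)` in the visits subquery: `visits_v1` uses the `CollapsingMergeTree` engine, where an updated visit is written as a new `+1` row plus a cancelling `-1` row until parts merge. Summing the sign counts each visit once, while a plain `count()` would overcount unmerged rows. A quick illustration you could run:

```sql
-- Illustrative: compare the raw row count with the collapsed visit count.
SELECT
    count() AS raw_rows,
    sum(Sign) AS collapsed_visits
FROM datasets.visits_v1;
```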
## Next steps {#next-steps}
A Practical Introduction to Sparse Primary Indexes in ClickHouse uses the hits dataset to discuss the differences in ClickHouse indexing compared to traditional relational databases, how ClickHouse builds and uses a sparse primary index, and indexing best practices.

Additional examples of queries to these tables can be found among the ClickHouse stateful tests.
:::note
The test suite uses a database named `test`, and the tables are named `hits` and `visits`. You can rename your database and tables, or edit the SQL from the test file.
:::
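For example, a hedged sketch of the rename approach (assuming the `datasets` tables created above and no conflicting `test` database):

```sql
CREATE DATABASE IF NOT EXISTS test;
-- RENAME TABLE can move both tables in one statement.
RENAME TABLE
    datasets.hits_v1 TO test.hits,
    datasets.visits_v1 TO test.visits;
```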
---
description: 'Learn how to load OpenCelliD data into ClickHouse, connect Apache Superset to ClickHouse and build a dashboard based on data'
sidebar_label: 'Cell towers'
slug: /getting-started/example-datasets/cell-towers
title: 'Geo Data using the Cell Tower Dataset'
keywords: ['cell tower data', 'geo data', 'OpenCelliD', 'geospatial dataset', 'getting started']
doc_type: 'guide'
---
import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx';
import Image from '@theme/IdealImage';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import CodeBlock from '@theme/CodeBlock';
import ActionsMenu from '@site/docs/_snippets/_service_actions_menu.md';
import SQLConsoleDetail from '@site/docs/_snippets/_launch_sql_console.md';
import SupersetDocker from '@site/docs/_snippets/_add_superset_detail.md';
import cloud_load_data_sample from '@site/static/images/_snippets/cloud-load-data-sample.png';
import cell_towers_1 from '@site/static/images/getting-started/example-datasets/superset-cell-tower-dashboard.png'
import add_a_database from '@site/static/images/getting-started/example-datasets/superset-add.png'
import choose_clickhouse_connect from '@site/static/images/getting-started/example-datasets/superset-choose-a-database.png'
import add_clickhouse_as_superset_datasource from '@site/static/images/getting-started/example-datasets/superset-connect-a-database.png'
import add_cell_towers_table_as_dataset from '@site/static/images/getting-started/example-datasets/superset-add-dataset.png'
import create_a_map_in_superset from '@site/static/images/getting-started/example-datasets/superset-create-map.png'
import specify_long_and_lat from '@site/static/images/getting-started/example-datasets/superset-lon-lat.png'
import superset_mcc_2024 from '@site/static/images/getting-started/example-datasets/superset-mcc-204.png'
import superset_radio_umts from '@site/static/images/getting-started/example-datasets/superset-radio-umts.png'
import superset_umts_netherlands from '@site/static/images/getting-started/example-datasets/superset-umts-netherlands.png'
import superset_cell_tower_dashboard from '@site/static/images/getting-started/example-datasets/superset-cell-tower-dashboard.png'
Goal {#goal}
In this guide you will learn how to:
- Load the OpenCelliD data in ClickHouse
- Connect Apache Superset to ClickHouse
- Build a dashboard based on data available in the dataset
Here is a preview of the dashboard created in this guide:
Get the dataset {#get-the-dataset}
This dataset is from OpenCelliD - The world's largest Open Database of Cell Towers.
As of 2021, it contains more than 40 million records about cell towers (GSM, LTE, UMTS, etc.) around the world with their geographical coordinates and metadata (country code, network, etc.).
OpenCelliD Project is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License, and we redistribute a snapshot of this dataset under the terms of the same license. The up-to-date version of the dataset is available to download after sign in.
Load the sample data {#load-the-sample-data}
ClickHouse Cloud provides an easy-button for uploading this dataset from S3. Log in to your ClickHouse Cloud organization, or create a free trial at ClickHouse.cloud.
Choose the Cell Towers dataset from the Sample data tab, and Load data:
Examine the schema of the cell_towers table {#examine-the-schema-of-the-cell_towers-table}
```sql
DESCRIBE TABLE cell_towers
```
This is the output of `DESCRIBE`. The field type choices are described further down in this guide.
```response
┌─name──────────┬─type──────────────────────────────────────────────────────────────────┬
│ radio         │ Enum8('' = 0, 'CDMA' = 1, 'GSM' = 2, 'LTE' = 3, 'NR' = 4, 'UMTS' = 5) │
│ mcc           │ UInt16                                                                 │
│ net           │ UInt16                                                                 │
│ area          │ UInt16                                                                 │
│ cell          │ UInt64                                                                 │
│ unit          │ Int16                                                                  │
│ lon           │ Float64                                                                │
│ lat           │ Float64                                                                │
│ range         │ UInt32                                                                 │
│ samples       │ UInt32                                                                 │
│ changeable    │ UInt8                                                                  │
│ created       │ DateTime                                                               │
│ updated       │ DateTime                                                               │
│ averageSignal │ UInt8                                                                  │
└───────────────┴────────────────────────────────────────────────────────────────────────┴
```
Create a table:
```sql
CREATE TABLE cell_towers
(
    radio Enum8('' = 0, 'CDMA' = 1, 'GSM' = 2, 'LTE' = 3, 'NR' = 4, 'UMTS' = 5),
    mcc UInt16,
    net UInt16,
    area UInt16,
    cell UInt64,
    unit Int16,
    lon Float64,
    lat Float64,
    range UInt32,
    samples UInt32,
    changeable UInt8,
    created DateTime,
    updated DateTime,
    averageSignal UInt8
)
ENGINE = MergeTree ORDER BY (radio, mcc, net, created);
```
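The `ORDER BY` key is what makes filters on `radio` (and then `mcc`) cheap: MergeTree stores rows sorted by this key, so such a filter reads one contiguous slice of the data. Here is a toy sketch of that idea in Python, using a binary search over sorted keys; this is the general principle only, not ClickHouse's actual sparse-index machinery:

```python
from bisect import bisect_left, bisect_right

# Rows sorted by the leading ORDER BY columns (radio, mcc) - a toy stand-in
# for how MergeTree lays data out on disk. Filtering on a leading key column
# then touches only one contiguous slice instead of the whole table.
rows = sorted([
    ("GSM", 262), ("LTE", 310), ("UMTS", 204), ("GSM", 310),
    ("UMTS", 262), ("LTE", 262), ("UMTS", 310), ("GSM", 204),
])

def key_range(rows, radio):
    """Return the [start, end) slice of rows whose first key column == radio."""
    keys = [r[0] for r in rows]
    return bisect_left(keys, radio), bisect_right(keys, radio)

start, end = key_range(rows, "UMTS")
umts_rows = rows[start:end]
print(len(umts_rows))  # only the UMTS slice is scanned, not all 8 rows
```

The same effect is why the `uploader` query in the YouTube example above only had to process a small fraction of the table.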
Import the dataset from a public S3 bucket (686 MB):
```sql
INSERT INTO cell_towers SELECT * FROM s3('https://datasets-documentation.s3.amazonaws.com/cell_towers/cell_towers.csv.xz', 'CSVWithNames')
```
Run some example queries {#examples}
A number of cell towers by type:
```sql
SELECT radio, count() AS c FROM cell_towers GROUP BY radio ORDER BY c DESC
```
```response
┌─radio─┬────────c─┐
│ UMTS  │ 20686487 │
│ LTE   │ 12101148 │
│ GSM   │  9931304 │
│ CDMA  │   556344 │
│ NR    │      867 │
└───────┴──────────┘
5 rows in set. Elapsed: 0.011 sec. Processed 43.28 million rows, 43.28 MB (3.83 billion rows/s., 3.83 GB/s.)
```
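As a quick sanity check outside the database, the same breakdown can be reproduced from the counts in the result above:

```python
# Counts by radio type, copied from the query result above.
counts = {"UMTS": 20_686_487, "LTE": 12_101_148, "GSM": 9_931_304,
          "CDMA": 556_344, "NR": 867}

total = sum(counts.values())  # 43,276,150 - the 43.28 million rows processed
for radio, c in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{radio:<5} {c:>10,} {c / total:6.1%}")
# UMTS alone accounts for roughly 48% of all towers in the dataset.
```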
Cell towers by mobile country code (MCC):
```sql
SELECT mcc, count() FROM cell_towers GROUP BY mcc ORDER BY count() DESC LIMIT 10
```
```response
┌─mcc─┬─count()─┐
│ 310 │ 5024650 │
│ 262 │ 2622423 │
│ 250 │ 1953176 │
│ 208 │ 1891187 │
│ 724 │ 1836150 │
│ 404 │ 1729151 │
│ 234 │ 1618924 │
│ 510 │ 1353998 │
│ 440 │ 1343355 │
│ 311 │ 1332798 │
└─────┴─────────┘
10 rows in set. Elapsed: 0.019 sec. Processed 43.28 million rows, 86.55 MB (2.33 billion rows/s., 4.65 GB/s.)
```
Based on the above query and the MCC list, the countries with the most cell towers are: the USA, Germany, and Russia.
You may want to create a Dictionary in ClickHouse to decode these values.
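Until such a dictionary is in place, even a small client-side lookup table makes the result readable. The mappings below are taken from the public Mobile Country Code list:

```python
# A few MCC -> country mappings (from the public Mobile Country Code list);
# a ClickHouse Dictionary would serve the same purpose server-side.
MCC = {310: "USA", 262: "Germany", 250: "Russia", 208: "France",
       724: "Brazil", 404: "India", 234: "UK", 510: "Indonesia",
       440: "Japan", 311: "USA"}

top_mcc = [310, 262, 250]  # top three codes from the query result above
print([MCC[code] for code in top_mcc])  # ['USA', 'Germany', 'Russia']
```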
Use case: incorporate geo data {#use-case}
Using the `pointInPolygon` function.
Create a table where we will store polygons:
```sql
CREATE TABLE moscow (polygon Array(Tuple(Float64, Float64)))
ORDER BY polygon;
```
Alternatively, create a temporary table:
```sql
CREATE TEMPORARY TABLE
moscow (polygon Array(Tuple(Float64, Float64)));
```
This is a rough shape of Moscow (without "new Moscow"):
sql
INSERT INTO moscow VALUES ([(37.84172564285271, 55.78000432402266),
(37.8381207618713, 55.775874525970494), (37.83979446823122, 55.775626746008065), (37.84243326983639, 55.77446586811748), (37.84262672750849, 55.771974101091104), (37.84153238623039, 55.77114545193181), (37.841124690460184, 55.76722010265554),
(37.84239076983644, 55.76654891107098), (37.842283558197025, 55.76258709833121), (37.8421759312134, 55.758073999993734), (37.84198330422974, 55.75381499999371), (37.8416827275085, 55.749277102484484), (37.84157576190186, 55.74794544108413),
(37.83897929098507, 55.74525257875241), (37.83739676451868, 55.74404373042019), (37.838732481460525, 55.74298009816793), (37.841183997352545, 55.743060321833575), (37.84097476190185, 55.73938799999373), (37.84048155819702, 55.73570799999372),
(37.840095812164286, 55.73228210777237), (37.83983814285274, 55.73080491981639), (37.83846476321406, 55.729799917464675), (37.83835745269769, 55.72919751082619), (37.838636380279524, 55.72859509486539), (37.8395161005249, 55.727705075632784),
(37.83897964285276, 55.722727886185154), (37.83862557539366, 55.72034817326636), (37.83559735744853, 55.71944437307499), (37.835370708803126, 55.71831419154461), (37.83738169402022, 55.71765218986692), (37.83823396494291, 55.71691750159089),
(37.838056931213345, 55.71547311301385), (37.836812846557606, 55.71221445615604), (37.83522525396725, 55.709331054395555), (37.83269301586908, 55.70953687463627), (37.829667367706236, 55.70903403789297), (37.83311126588435, 55.70552351822608),
(37.83058993121339, 55.70041317726053), (37.82983872750851, 55.69883771404813), (37.82934501586913, 55.69718947487017), (37.828926414016685, 55.69504441658371), (37.82876530422971, 55.69287499999378), (37.82894754100031, 55.690759754047335),
(37.827697554878185, 55.68951421135665), (37.82447346292115, 55.68965045405069), (37.83136543914793, 55.68322046195302), (37.833554015869154, 55.67814012759211), (37.83544184655761, 55.67295011628339), (37.837480388885474, 55.6672498719639),
(37.838960677246064, 55.66316274139358), (37.83926093121332, 55.66046999999383), (37.839025050262435, 55.65869897264431), (37.83670784390257, 55.65794084879904), (37.835656529083245, 55.65694309303843), (37.83704060449217, 55.65689306460552),
(37.83696819873806, 55.65550363526252), (37.83760389616388, 55.65487847246661), (37.83687972750851, 55.65356745541324), (37.83515216004943, 55.65155951234079), (37.83312418518067, 55.64979413590619), (37.82801726983639, 55.64640836412121),
(37.820614174591, 55.64164525405531), (37.818908190475426, 55.6421883258084), (37.81717543386075, 55.64112490388471), (37.81690987037274, 55.63916106913107), (37.815099354492155, 55.637925371757085), (37.808769150787356, 55.633798276884455),
(37.80100123544311, 55.62873670012244), (37.79598013491824, 55.62554336109055), (37.78634567724606, 55.62033499605651), (37.78334147619623, 55.618768681480326), (37.77746201055901, 55.619855533402706), (37.77527329626457, 55.61909966711279),
(37.77801986242668, 55.618770300976294), (37.778212973541216, 55.617257701952106), (37.77784818518065, 55.61574504433011), (37.77016867724609, 55.61148576294007), (37.760191219573976, 55.60599579539028), (37.75338926983641, 55.60227892751446),
(37.746329965606634, 55.59920577639331), (37.73939925396728, 55.59631430313617), (37.73273665739439, 55.5935318803559), (37.7299954450912, 55.59350760316188), (37.7268679946899, 55.59469840523759), (37.72626726983634, 55.59229549697373),
(37.7262673598022, 55.59081598950582), (37.71897193121335, 55.5877595845419), (37.70871550793456, 55.58393177431724), (37.700497489410374, 55.580917323756644), (37.69204305026244, 55.57778089778455), (37.68544477378839, 55.57815154690915),
(37.68391050793454, 55.57472945079756), (37.678803592590306, 55.57328235936491), (37.6743402539673, 55.57255251445782), (37.66813862698363, 55.57216388774464), (37.617927457672096, 55.57505691895805), (37.60443099999999, 55.5757737568051),
(37.599683515869145, 55.57749105910326), (37.59754177842709, 55.57796291823627), (37.59625834786988, 55.57906686095235), (37.59501783265684, 55.57746616444403), (37.593090671936025, 55.57671634534502), (37.587018007904, 55.577944600233785),
(37.578692203704804, 55.57982895000019), (37.57327546607398, 55.58116294118248), (37.57385012109279, 55.581550362779), (37.57399562266922, 55.5820107079112), (37.5735356072979, 55.58226289171689), (37.57290393054962, 55.582393529795155),
(37.57037722355653, 55.581919415056234), (37.5592298306885, 55.584471614867844), (37.54189249206543, 55.58867650795186), (37.5297256269836, 55.59158133551745), (37.517837865081766, 55.59443656218868), (37.51200186508174, 55.59635625174229),
(37.506808949737554, 55.59907823904434), (37.49820432275389, 55.6062944994944), (37.494406071441674, 55.60967103463367), (37.494760001358024, 55.61066689753365), (37.49397137107085, 55.61220931698269), (37.49016528606031, 55.613417718449064),
(37.48773249206542, 55.61530616333343), (37.47921386508177, 55.622640129112334), (37.470652153442394, 55.62993723476164), (37.46273446298218, 55.6368075123157), (37.46350692265317, 55.64068225239439), (37.46050283203121, 55.640794546982576),
(37.457627470916734, 55.64118904154646), (37.450718034393326, 55.64690488145138), (37.44239252645875, 55.65397824729769), (37.434587576721185, 55.66053543155961), (37.43582144975277, 55.661693766520735), (37.43576786245721, 55.662755031737014),
(37.430982915344174, 55.664610641628116), (37.428547447097685, 55.66778515273695), (37.42945134592044, 55.668633314343566), (37.42859571562949, 55.66948145750025), (37.4262836402282, 55.670813882451405), (37.418709037048295, 55.6811141674414),
(37.41922139651101, 55.68235377885389), (37.419218771842885, 55.68359335082235), (37.417196501327446, 55.684375235224735), (37.41607020370478, 55.68540557585352), (37.415640857147146, 55.68686637150793), (37.414632153442334, 55.68903015131686),
(37.413344899475064, 55.690896881757396), (37.41171432275391, 55.69264232162232), (37.40948282275393, 55.69455101638112), (37.40703674603271, 55.69638690385348), (37.39607169577025, 55.70451821283731), (37.38952706878662, 55.70942491932811),
(37.387778313491815, 55.71149057784176), (37.39049275399779, 55.71419814298992), (37.385557272491454, 55.7155489617061), (37.38388335714726, 55.71849856042102), (37.378368238098155, 55.7292763261685), (37.37763597123337, 55.730845879211614),
(37.37890062088197, 55.73167906388319), (37.37750451918789, 55.734703664681774), (37.375610832015965, 55.734851959522246), (37.3723813571472, 55.74105626086403), (37.37014935714723, 55.746115620904355), (37.36944173016362, 55.750883999993725),
(37.36975304365541, 55.76335905525834), (37.37244070571134, 55.76432079697595), (37.3724259757175, 55.76636979670426), (37.369922155757884, 55.76735417953104), (37.369892695770275, 55.76823419316575), (37.370214730163575, 55.782312184391266),
(37.370493611114505, 55.78436801120489), (37.37120164550783, 55.78596427165359), (37.37284851456452, 55.7874378183096), (37.37608325135799, 55.7886695054807), (37.3764587460632, 55.78947647305964), (37.37530000265506, 55.79146512926804),
(37.38235915344241, 55.79899647809345), (37.384344043655396, 55.80113596939471), (37.38594269577028, 55.80322699999366), (37.38711208598329, 55.804919036911976), (37.3880239841309, 55.806610999993666), (37.38928977249147, 55.81001864976979),
(37.39038389947512, 55.81348641242801), (37.39235781481933, 55.81983538336746), (37.393709457672124, 55.82417822811877), (37.394685720901464, 55.82792275755836), (37.39557615344238, 55.830447148154136), (37.39844478226658, 55.83167107969975),
(37.40019761214057, 55.83151823557964), (37.400398790382326, 55.83264967594742), (37.39659544313046, 55.83322180909622), (37.39667059524539, 55.83402792148566), (37.39682089947515, 55.83638877400216), (37.39643489154053, 55.83861656112751),
(37.3955338994751, 55.84072348043264), (37.392680272491454, 55.84502158126453), (37.39241188227847, 55.84659117913199), (37.392529730163616, 55.84816071336481), (37.39486835714723, 55.85288092980303), (37.39873052645878, 55.859893456073635),
(37.40272161111449, 55.86441833633205), (37.40697072750854, 55.867579567544375), (37.410007082016016, 55.868369880337), (37.4120992989502, 55.86920843741314), (37.412668021163924, 55.87055369615854), (37.41482461111453, 55.87170587948249),
(37.41862266137694, 55.873183961039565), (37.42413732540892, 55.874879126654704), (37.4312182698669, 55.875614937236705), (37.43111093783558, 55.8762723478417), (37.43332105622856, 55.87706546369396), (37.43385747619623, 55.87790681284802),
(37.441303050262405, 55.88027084462084), (37.44747234260555, 55.87942070143253), (37.44716141796871, 55.88072960917233), (37.44769797085568, 55.88121221323979), (37.45204320500181, 55.882080694420715), (37.45673176190186, 55.882346110794586),
(37.463383999999984, 55.88252729504517), (37.46682797486874, 55.88294937719063), (37.470014457672086, 55.88361266759345), (37.47751410450743, 55.88546991372396), (37.47860317658232, 55.88534929207307), (37.48165826025772, 55.882563306475106),
(37.48316434442331, 55.8815803226785), (37.483831555817645, 55.882427612793315), (37.483182967125686, 55.88372791409729), (37.483092277908824, 55.88495581062434), (37.4855716508179, 55.8875561994203), (37.486440636245746, 55.887827444039566),
(37.49014203439328, 55.88897899871799), (37.493210285705544, 55.890208937135604), (37.497512451065035, 55.891342397444696), (37.49780744510645, 55.89174030252967), (37.49940333499519, 55.89239745507079), (37.50018383334346, 55.89339220941865),
(37.52421672750851, 55.903869074155224), (37.52977457672118, 55.90564076517974), (37.53503220370484, 55.90661661218259), (37.54042858064267, 55.90714113744566), (37.54320461007303, 55.905645048442985), (37.545686966066306, 55.906608607018505),
(37.54743976120755, 55.90788552162358), (37.55796999999999, 55.90901557907218), (37.572711542327866, 55.91059395704873), (37.57942799999998, 55.91073854155573), (37.58502865872187, 55.91009969268444), (37.58739968913264, 55.90794809960554),
(37.59131567193598, 55.908713267595054), (37.612687423278814, 55.902866854295375), (37.62348079629517, 55.90041967242986), (37.635797880950896, 55.898141151686396), (37.649487626983664, 55.89639275532968), (37.65619302513125, 55.89572360207488),
(37.66294133862307, 55.895295577183965), (37.66874564418033, 55.89505457604897), (37.67375601586915, 55.89254677027454), (37.67744661901856, 55.8947775867987), (37.688347, 55.89450045676125), (37.69480554232789, 55.89422926332761),
(37.70107096560668, 55.89322256101114), (37.705962965606716, 55.891763491662616), (37.711885134918205, 55.889110234998974), (37.71682005026245, 55.886577568759876), (37.7199315476074, 55.88458159806678), (37.72234560316464, 55.882281005794134),
(37.72364385977171, 55.8809452036196), (37.725371142837474, 55.8809722706006), (37.727870902099546, 55.88037213862385), (37.73394330422971, 55.877941504088696), (37.745339592590376, 55.87208120378722), (37.75525267724611, 55.86703807949492),
(37.76919976190188, 55.859821640197474), (37.827835219574, 55.82962968399116), (37.83341438888553, 55.82575289922351), (37.83652584655761, 55.82188784027888), (37.83809213491821, 55.81612575504693), (37.83605359521481, 55.81460347077685),
(37.83632178569025, 55.81276696067908), (37.838623105812026, 55.811486181656385), (37.83912198147584, 55.807329380532785), (37.839079078033414, 55.80510270463816), (37.83965844708251, 55.79940712529036), (37.840581150787344, 55.79131399999368),
(37.84172564285271, 55.78000432402266)]);
Check how many cell towers are in Moscow:
```sql
SELECT count() FROM cell_towers
WHERE pointInPolygon((lon, lat), (SELECT * FROM moscow))
```
```response
┌─count()─┐
│  310463 │
└─────────┘
1 rows in set. Elapsed: 0.067 sec. Processed 43.28 million rows, 692.42 MB (645.83 million rows/s., 10.33 GB/s.)
```
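What `pointInPolygon` checks can be illustrated with the classic ray-casting test. This is a simplified flat-plane sketch of the idea, not ClickHouse's implementation:

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: count edge crossings of a horizontal ray from the point.

    An odd number of crossings means the point is inside the polygon.
    """
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the ray's y, and is the crossing to the right?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon((2, 2), square))  # True
print(point_in_polygon((5, 2), square))  # False
```

The real query applies the same kind of test to every `(lon, lat)` pair against the Moscow polygon.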
Review of the schema {#review-of-the-schema}
Before building visualizations in Superset, have a look at the columns that you will use. This dataset primarily provides the location (longitude and latitude) and radio types at mobile cellular towers worldwide. The column descriptions can be found in the community forum. The columns used in the visualizations that will be built are described below.
Here is a description of the columns taken from the OpenCelliD forum:
| Column | Description                                          |
|--------|------------------------------------------------------|
| radio  | Technology generation: CDMA, GSM, UMTS, 5G NR        |
| mcc    | Mobile Country Code: `204` is The Netherlands        |
| lon    | Longitude: With Latitude, approximate tower location |
| lat    | Latitude: With Longitude, approximate tower location |
:::tip mcc
To find your MCC, check Mobile network codes, and use the three digits in the Mobile country code column.
:::
The schema for this table was designed for compact storage on disk and query speed.
- The `radio` data is stored as an `Enum8` (`UInt8`) rather than a string.
- `mcc`, or Mobile country code, is stored as a `UInt16` as we know the range is 1 - 999.
- `lon` and `lat` are `Float64`.
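The saving from `Enum8` is easy to estimate: one byte per row regardless of the label, versus the label's full byte length for a plain string column (ignoring compression, which narrows the gap in practice). A back-of-the-envelope comparison for the 43.28 million rows:

```python
# Rough per-row cost of the radio column: Enum8 stores 1 byte per row,
# a String column would store the label bytes (compression ignored).
rows = 43_276_150          # total row count from the radio-type query above
avg_label_len = 4          # 'UMTS', 'LTE', 'GSM', 'CDMA', 'NR' average ~4 bytes

enum8_bytes = rows * 1
string_bytes = rows * avg_label_len
print(f"Enum8: {enum8_bytes / 1e6:.0f} MB, String: ~{string_bytes / 1e6:.0f} MB")
# Enum8: 43 MB, String: ~173 MB
```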
None of the other fields are used in the queries or visualizations in this guide, but they are described in the forum linked above if you are interested.
Build visualizations with Apache Superset {#build-visualizations-with-apache-superset}
Superset is easy to run from Docker. If you already have Superset running, all you need to do is add ClickHouse Connect with `pip install clickhouse-connect`. If you need to install Superset, open Launch Apache Superset in Docker directly below.
To build a Superset dashboard using the OpenCelliD dataset you should:
- Add your ClickHouse service as a Superset database
- Add the table `cell_towers` as a Superset dataset
- Create some charts
- Add the charts to a dashboard
Add your ClickHouse service as a Superset database {#add-your-clickhouse-service-as-a-superset-database}
In Superset a database can be added by choosing the database type, and then providing the connection details. Open Superset and look for the +, which has a menu with Data and then Connect database options.
Choose ClickHouse Connect from the list:
:::note
If ClickHouse Connect is not one of your options, then you will need to install it. The command is `pip install clickhouse-connect`, and more info is available here.
:::
Add your connection details {#add-your-connection-details}
:::tip
Make sure that you set SSL on when connecting to ClickHouse Cloud or other ClickHouse systems that enforce the use of SSL.
:::
Add the table cell_towers as a Superset dataset {#add-the-table-cell_towers-as-a-superset-dataset}
In Superset a dataset maps to a table within a database. Click on add a dataset and choose your ClickHouse service, the database containing your table (`default`), and choose the `cell_towers` table:
Create some charts {#create-some-charts}
When you choose to add a chart in Superset you have to specify the dataset (`cell_towers`) and the chart type. Since the OpenCelliD dataset provides longitude and latitude coordinates for cell towers, we will create a Map chart. The deck.gl Scatterplot type is suited to this dataset as it works well with dense data points on a map.
Specify the query used for the map {#specify-the-query-used-for-the-map}
A deck.gl Scatterplot requires a longitude and latitude, and one or more filters can also be applied to the query. In this example two filters are applied, one for cell towers with UMTS radios, and one for the Mobile country code assigned to The Netherlands.
The fields `lon` and `lat` contain the longitude and latitude:
Add a filter with `mcc` = `204` (or substitute any other `mcc` value):
Add a filter with `radio` = `'UMTS'` (or substitute any other `radio` value; you can see the choices in the output of `DESCRIBE TABLE cell_towers`):
This is the full configuration for the chart that filters on `radio = 'UMTS'` and `mcc = 204`:
Click on UPDATE CHART to render the visualization.
Add the charts to a dashboard {#add-the-charts-to-a-dashboard}
This screenshot shows cell tower locations with LTE, UMTS, and GSM radios. The charts are all created in the same way, and they are added to a dashboard.
:::tip
The data is also available for interactive queries in the Playground.
This example will populate the username and even the query for you.
Although you cannot create tables in the Playground, you can run all of the queries and even use Superset (adjust the host name and port number).
:::
description: 'Over 20 billion records of data from Sensor.Community, a contributors-driven global sensor network that creates Open Environmental Data.'
sidebar_label: 'Environmental sensors data'
slug: /getting-started/example-datasets/environmental-sensors
title: 'Environmental Sensors Data'
doc_type: 'guide'
keywords: ['environmental sensors', 'Sensor.Community', 'air quality data', 'environmental data', 'getting started']
import Image from '@theme/IdealImage';
import no_events_per_day from '@site/static/images/getting-started/example-datasets/sensors_01.png';
import sensors_02 from '@site/static/images/getting-started/example-datasets/sensors_02.png';
Sensor.Community is a contributors-driven global sensor network that creates Open Environmental Data. The data is collected from sensors all over the globe. Anyone can purchase a sensor and place it wherever they like. The APIs to download the data are on GitHub, and the data is freely available under the Database Contents License (DbCL).
:::important
The dataset has over 20 billion records, so be careful just copying-and-pasting the commands below unless your resources can handle that type of volume. The commands below were executed on a Production instance of ClickHouse Cloud.
:::
The data is in S3, so we can use the `s3` table function to create a table from the files. We can also query the data in place. Let's look at a few rows before attempting to insert it into ClickHouse:
```sql
SELECT *
FROM s3(
    'https://clickhouse-public-datasets.s3.eu-central-1.amazonaws.com/sensors/monthly/2019-06_bmp180.csv.zst',
    'CSVWithNames'
)
LIMIT 10
SETTINGS format_csv_delimiter = ';';
```
The data is in CSV files but uses a semi-colon for the delimiter. The rows look like:
```response
┌─sensor_id─┬─sensor_type─┬─location─┬────lat─┬────lon─┬─timestamp───────────┬──pressure─┬─altitude─┬─pressure_sealevel─┬─temperature─┐
│      9119 │ BMP180      │     4594 │ 50.994 │  7.126 │ 2019-06-01T00:00:00 │    101471 │     ᴺᵁᴸᴸ │              ᴺᵁᴸᴸ │        19.9 │
│     21210 │ BMP180      │    10762 │ 42.206 │ 25.326 │ 2019-06-01T00:00:00 │     99525 │     ᴺᵁᴸᴸ │              ᴺᵁᴸᴸ │        19.3 │
│     19660 │ BMP180      │     9978 │ 52.434 │ 17.056 │ 2019-06-01T00:00:04 │    101570 │     ᴺᵁᴸᴸ │              ᴺᵁᴸᴸ │        15.3 │
│     12126 │ BMP180      │     6126 │ 57.908 │  16.49 │ 2019-06-01T00:00:05 │ 101802.56 │     ᴺᵁᴸᴸ │              ᴺᵁᴸᴸ │        8.07 │
│     15845 │ BMP180      │     8022 │ 52.498 │ 13.466 │ 2019-06-01T00:00:05 │    101878 │     ᴺᵁᴸᴸ │              ᴺᵁᴸᴸ │          23 │
│     16415 │ BMP180      │     8316 │ 49.312 │  6.744 │ 2019-06-01T00:00:06 │    100176 │     ᴺᵁᴸᴸ │              ᴺᵁᴸᴸ │        14.7 │
│      7389 │ BMP180      │     3735 │ 50.136 │ 11.062 │ 2019-06-01T00:00:06 │     98905 │     ᴺᵁᴸᴸ │              ᴺᵁᴸᴸ │        12.1 │
│     13199 │ BMP180      │     6664 │ 52.514 │  13.44 │ 2019-06-01T00:00:07 │ 101855.54 │     ᴺᵁᴸᴸ │              ᴺᵁᴸᴸ │       19.74 │
│     12753 │ BMP180      │     6440 │ 44.616 │  2.032 │ 2019-06-01T00:00:07 │     99475 │     ᴺᵁᴸᴸ │              ᴺᵁᴸᴸ │          17 │
│     16956 │ BMP180      │     8594 │ 52.052 │  8.354 │ 2019-06-01T00:00:08 │    101322 │     ᴺᵁᴸᴸ │              ᴺᵁᴸᴸ │        17.2 │
└───────────┴─────────────┴──────────┴────────┴────────┴─────────────────────┴───────────┴──────────┴───────────────────┴─────────────┘
```
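Any standard CSV reader handles such files once the delimiter is overridden - the client-side analogue of the `format_csv_delimiter = ';'` setting used above. For example, in Python, using the values of the first row shown above:

```python
import csv
import io

# A one-row sample in the same shape as the sensor files; note the ';' delimiter.
sample = "sensor_id;sensor_type;lat;lon;pressure\n9119;BMP180;50.994;7.126;101471\n"

reader = csv.DictReader(io.StringIO(sample), delimiter=";")
rows = list(reader)
print(rows[0]["sensor_type"], rows[0]["pressure"])  # BMP180 101471
```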
We will use the following `MergeTree` table to store the data in ClickHouse:
```sql
CREATE TABLE sensors
(
    sensor_id UInt16,
    sensor_type Enum('BME280', 'BMP180', 'BMP280', 'DHT22', 'DS18B20', 'HPM', 'HTU21D', 'PMS1003', 'PMS3003', 'PMS5003', 'PMS6003', 'PMS7003', 'PPD42NS', 'SDS011'),
    location UInt32,
    lat Float32,
    lon Float32,
    timestamp DateTime,
    P1 Float32,
    P2 Float32,
    P0 Float32,
    durP1 Float32,
    ratioP1 Float32,
    durP2 Float32,
    ratioP2 Float32,
    pressure Float32,
    altitude Float32,
    pressure_sealevel Float32,
    temperature Float32,
    humidity Float32,
    date Date MATERIALIZED toDate(timestamp)
)
ENGINE = MergeTree
ORDER BY (timestamp, sensor_id);
```
ClickHouse Cloud services have a cluster named `default`. We will use the `s3Cluster` table function, which reads S3 files in parallel from the nodes in your cluster. (If you do not have a cluster, just use the `s3` function and remove the cluster name.)
This query will take a while - it's about 1.67T of data uncompressed:
```sql
INSERT INTO sensors
SELECT *
FROM s3Cluster(
    'default',
    'https://clickhouse-public-datasets.s3.amazonaws.com/sensors/monthly/*.csv.zst',
    'CSVWithNames',
    $$ sensor_id UInt16,
    sensor_type String,
    location UInt32,
    lat Float32,
    lon Float32,
    timestamp DateTime,
    P1 Float32,
    P2 Float32,
    P0 Float32,
    durP1 Float32,
    ratioP1 Float32,
    durP2 Float32,
    ratioP2 Float32,
    pressure Float32,
    altitude Float32,
    pressure_sealevel Float32,
    temperature Float32,
    humidity Float32 $$
)
SETTINGS
    format_csv_delimiter = ';',
    input_format_allow_errors_ratio = '0.5',
    input_format_allow_errors_num = 10000,
    input_format_parallel_parsing = 0,
    date_time_input_format = 'best_effort',
    max_insert_threads = 32,
    parallel_distributed_insert_select = 1;
```
Here is the response, showing the number of rows and the speed of processing. Data was ingested at a rate of over 6M rows per second!
```response
0 rows in set. Elapsed: 3419.330 sec. Processed 20.69 billion rows, 1.67 TB (6.05 million rows/s., 488.52 MB/s.)
```
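The throughput figures in that response follow directly from the row count, the data size, and the elapsed time:

```python
rows = 20.69e9          # rows processed, from the response above
elapsed = 3419.330      # seconds
bytes_read = 1.67e12    # ~1.67 TB uncompressed

print(f"{rows / elapsed / 1e6:.2f} million rows/s")  # ~6.05 million rows/s
print(f"{bytes_read / elapsed / 1e6:.1f} MB/s")      # ~488 MB/s
```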
Let's see how much disk storage is needed for the `sensors` table:
```sql
SELECT
    disk_name,
    formatReadableSize(sum(data_compressed_bytes) AS size) AS compressed,
    formatReadableSize(sum(data_uncompressed_bytes) AS usize) AS uncompressed,
    round(usize / size, 2) AS compr_rate,
    sum(rows) AS rows,
    count() AS part_count
FROM system.parts
WHERE (active = 1) AND (table = 'sensors')
GROUP BY disk_name
ORDER BY size DESC;
```
The 1.67T is compressed down to 310 GiB, and there are 20.69 billion rows:
```response
┌─disk_name─┬─compressed─┬─uncompressed─┬─compr_rate─┬────────rows─┬─part_count─┐
│ s3disk    │ 310.21 GiB │ 1.30 TiB     │       4.29 │ 20693971809 │        472 │
└───────────┴────────────┴──────────────┴────────────┴─────────────┴────────────┘
```
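The `compr_rate` column is simply the uncompressed size divided by the compressed size; with the reported figures:

```python
compressed_gib = 310.21    # compressed size, from the query result above
uncompressed_tib = 1.30    # uncompressed size

uncompressed_gib = uncompressed_tib * 1024  # TiB -> GiB
ratio = uncompressed_gib / compressed_gib
print(round(ratio, 2))  # 4.29
```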
Let's analyze the data now that it's in ClickHouse. Notice the quantity of data increases over time as more sensors are deployed:
```sql
SELECT
    date,
    count()
FROM sensors
GROUP BY date
ORDER BY date ASC;
```
We can create a chart in the SQL Console to visualize the results:
This query counts the number of overly hot and humid days:
```sql
WITH toYYYYMMDD(timestamp) AS day
SELECT day, count() FROM sensors
WHERE temperature >= 40 AND temperature <= 50 AND humidity >= 90
GROUP BY day
ORDER BY day ASC;
```
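The `toYYYYMMDD` bucketing used here can be mimicked in plain Python to make the day-key logic explicit (a sketch over made-up readings, not the real dataset):

```python
from collections import Counter
from datetime import datetime

# toYYYYMMDD(timestamp) turns a DateTime into an integer day key,
# e.g. 2023-07-14 13:05:00 -> 20230714.
def to_yyyymmdd(ts: datetime) -> int:
    return ts.year * 10000 + ts.month * 100 + ts.day

# (timestamp, temperature, humidity) - illustrative readings only.
readings = [
    (datetime(2023, 7, 14, 13, 5), 42.0, 91.0),
    (datetime(2023, 7, 14, 18, 0), 45.5, 95.0),
    (datetime(2023, 7, 15, 12, 0), 38.0, 92.0),  # too cool: filtered out
    (datetime(2023, 7, 16, 14, 0), 41.0, 90.0),
]

hot_humid_days = Counter(
    to_yyyymmdd(ts)
    for ts, temperature, humidity in readings
    if 40 <= temperature <= 50 and humidity >= 90
)
print(sorted(hot_humid_days.items()))
# [(20230714, 2), (20230716, 1)]
```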
Here's a visualization of the result: | {"source_file": "environmental-sensors.md"} | [
7d2e426f-f5dd-4da1-827e-ec4dda0406df | slug: /guides/developer/alternative-query-languages
sidebar_label: 'Alternative query languages'
title: 'Alternative Query Languages'
description: 'Use alternative query languages in ClickHouse'
keywords: ['alternative query languages', 'query dialects', 'MySQL dialect', 'PostgreSQL dialect', 'developer guide']
doc_type: 'reference'
import ExperimentalBadge from '@theme/badges/ExperimentalBadge';
Besides standard SQL, ClickHouse supports various alternative query languages for querying data.
The currently supported dialects are:

- `clickhouse`: the default SQL dialect of ClickHouse
- `prql`: Pipelined Relational Query Language (PRQL)
- `kusto`: Kusto Query Language (KQL)

Which query language is used is controlled by the setting `dialect`.
Standard SQL {#standard-sql}
Standard SQL is the default query language of ClickHouse.
```sql
SET dialect = 'clickhouse'
```
Pipelined relational query language (PRQL) {#pipelined-relational-query-language-prql}
To enable PRQL:
```sql
SET allow_experimental_prql_dialect = 1; -- this SET statement is required only for ClickHouse versions >= v25.1
SET dialect = 'prql'
```
Example PRQL query:
```prql
from trips
aggregate {
    ct = count this
    total_days = sum days
}
```
Under the hood, ClickHouse uses transpilation from PRQL to SQL to run PRQL queries.
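For reference, the PRQL pipeline above corresponds roughly to the following SQL (an illustrative hand-translation, not the PRQL compiler's exact output):

```sql
SELECT
    count(*) AS ct,
    sum(days) AS total_days
FROM trips;
```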
Kusto query language (KQL) {#kusto-query-language-kql}
To enable KQL:
```sql
SET allow_experimental_kusto_dialect = 1; -- this SET statement is required only for ClickHouse versions >= 25.1
SET dialect = 'kusto'
```
```kql title="Query"
numbers(10) | project number
```

```response title="Response"
┌─number─┐
│      0 │
│      1 │
│      2 │
│      3 │
│      4 │
│      5 │
│      6 │
│      7 │
│      8 │
│      9 │
└────────┘
```
Note that KQL queries may not be able to access all functions defined in ClickHouse. | {"source_file": "alternative-query-languages.md"} | [
09978dfe-5cee-469c-930e-8291ee0ac90e | slug: /guides/developer/debugging-memory-issues
sidebar_label: 'Debugging memory issues'
sidebar_position: 1
description: 'Queries to help you debug memory issues.'
keywords: ['memory issues']
title: 'Debugging memory issues'
doc_type: 'guide'
Debugging memory issues {#debugging-memory-issues}
When encountering memory issues or a memory leak, knowing what queries and resources are consuming a significant amount of memory is helpful. Below you can find queries that can help you to debug memory issues by finding which queries, databases, and tables can be optimized:
List currently running processes by peak memory usage {#list-currently-running-processes-by-peak-memory}
```sql
SELECT
    initial_query_id,
    query,
    elapsed,
    formatReadableSize(memory_usage),
    formatReadableSize(peak_memory_usage)
FROM system.processes
ORDER BY peak_memory_usage DESC
LIMIT 100;
```
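`formatReadableSize`, used above, renders raw byte counts as human-readable strings. A minimal Python re-implementation of the same idea (a sketch, not ClickHouse's exact formatting code):

```python
def format_readable_size(num_bytes: float) -> str:
    """Render a byte count with binary (1024-based) units and two decimals."""
    units = ["B", "KiB", "MiB", "GiB", "TiB", "PiB"]
    size = float(num_bytes)
    for unit in units:
        if size < 1024 or unit == units[-1]:
            return f"{size:.2f} {unit}"
        size /= 1024

print(format_readable_size(1536))          # 1.50 KiB
print(format_readable_size(20 * 1024**3))  # 20.00 GiB
```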
List metrics for memory usage {#list-metrics-for-memory-usage}
```sql
SELECT
    metric, description, formatReadableSize(value) size
FROM
    system.asynchronous_metrics
WHERE
    metric LIKE '%Cach%'
    OR metric LIKE '%Mem%'
ORDER BY
    value DESC;
```
List tables by current memory usage {#list-tables-by-current-memory-usage}
```sql
SELECT
    database,
    name,
    formatReadableSize(total_bytes)
FROM system.tables
WHERE engine IN ('Memory', 'Set', 'Join');
```
Output total memory used by merges {#output-total-memory-used-by-merges}
```sql
SELECT formatReadableSize(sum(memory_usage)) FROM system.merges;
```
Output total memory used by currently running processes {#output-total-memory-used-by-currently-running-processes}
```sql
SELECT formatReadableSize(sum(memory_usage)) FROM system.processes;
```
Output total memory used by dictionaries {#output-total-memory-used-by-dictionaries}
```sql
SELECT formatReadableSize(sum(bytes_allocated)) FROM system.dictionaries;
```
Output total memory used by primary keys and index granularity {#output-total-memory-used-by-primary-keys}
```sql
SELECT
    sumIf(data_uncompressed_bytes, part_type = 'InMemory') AS memory_parts,
    formatReadableSize(sum(primary_key_bytes_in_memory)) AS primary_key_bytes_in_memory,
    formatReadableSize(sum(primary_key_bytes_in_memory_allocated)) AS primary_key_bytes_in_memory_allocated,
    formatReadableSize(sum(index_granularity_bytes_in_memory)) AS index_granularity_bytes_in_memory,
    formatReadableSize(sum(index_granularity_bytes_in_memory_allocated)) AS index_granularity_bytes_in_memory_allocated
FROM system.parts;
```
| {"source_file": "debugging-memory-issues.md"} | [
0ad201da-3ff8-4efa-9310-94be982fb4de | slug: /guides/developer/ttl
sidebar_label: 'TTL (Time To Live)'
sidebar_position: 2
keywords: ['ttl', 'time to live', 'clickhouse', 'old', 'data']
description: 'TTL (time-to-live) refers to the capability of having rows or columns moved, deleted, or rolled up after a certain interval of time has passed.'
title: 'Manage Data with TTL (Time-to-live)'
show_related_blogs: true
doc_type: 'guide'
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
Manage data with TTL (time-to-live)
Overview of TTL {#overview-of-ttl}
TTL (time-to-live) refers to the capability of having rows or columns moved, deleted, or rolled up after a certain interval of time has passed. While the expression "time-to-live" sounds like it only applies to deleting old data, TTL has several use cases:
- Removing old data: no surprise, you can delete rows or columns after a specified time interval
- Moving data between disks: after a certain amount of time, you can move data between storage volumes - useful for deploying a hot/warm/cold architecture
- Data rollup: roll up your older data into various useful aggregations and computations before deleting it
:::note
TTL can be applied to entire tables or specific columns.
:::
TTL syntax {#ttl-syntax}
The `TTL` clause can appear after a column definition and/or at the end of the table definition. Use the `INTERVAL` clause to define a length of time (which needs to be a `Date` or `DateTime` data type). For example, the following table has two columns with `TTL` clauses:
```sql
CREATE TABLE example1 (
    timestamp DateTime,
    x UInt32 TTL timestamp + INTERVAL 1 MONTH,
    y String TTL timestamp + INTERVAL 1 DAY,
    z String
)
ENGINE = MergeTree
ORDER BY tuple()
```
- The `x` column has a time to live of 1 month from the `timestamp` column
- The `y` column has a time to live of 1 day from the `timestamp` column
When the interval lapses, the column expires. ClickHouse replaces the column value with the default value of its data type. If all the column values in the data part expire, ClickHouse deletes this column from the data part in the filesystem.
:::note
TTL rules can be altered or deleted. See the Manipulations with Table TTL page for more details.
:::
Triggering TTL events {#triggering-ttl-events}
The deleting or aggregating of expired rows is not immediate - it only occurs during table merges. If you have a table that's not actively merging (for whatever reason), there are two settings that trigger TTL events:
- `merge_with_ttl_timeout`: the minimum delay in seconds before repeating a merge with delete TTL. The default is 14400 seconds (4 hours).
- `merge_with_recompression_ttl_timeout`: the minimum delay in seconds before repeating a merge with recompression TTL (rules that roll up data before deleting). Default value: 14400 seconds (4 hours). | {"source_file": "ttl.md"} | [
8b3d2a23-1a72-4eef-9e90-d163cb14bdb9 | So by default, your TTL rules will be applied to your table at least once every 4 hours. Just modify the settings above if you need your TTL rules applied more frequently.
:::note
Not a great solution (or one that we recommend you use frequently), but you can also force a merge using `OPTIMIZE`:

```sql
OPTIMIZE TABLE example1 FINAL
```

`OPTIMIZE` initializes an unscheduled merge of the parts of your table, and `FINAL` forces a reoptimization if your table is already a single part.
:::
Removing rows {#removing-rows}
To remove entire rows from a table after a certain amount of time, define the TTL rule at the table level:
```sql
CREATE TABLE customers (
    timestamp DateTime,
    name String,
    balance Int32,
    address String
)
ENGINE = MergeTree
ORDER BY timestamp
TTL timestamp + INTERVAL 12 HOUR
```
Additionally, it is possible to define a TTL rule based on a record's value.
This is easily implemented by specifying a `WHERE` condition.
Multiple conditions are allowed:
```sql
CREATE TABLE events
(
    `event` String,
    `time` DateTime,
    `value` UInt64
)
ENGINE = MergeTree
ORDER BY (event, time)
TTL time + INTERVAL 1 MONTH DELETE WHERE event != 'error',
    time + INTERVAL 6 MONTH DELETE WHERE event = 'error'
```
Removing columns {#removing-columns}
Instead of deleting the entire row, suppose you want just the balance and address columns to expire. Let's modify the `customers` table and add a TTL of 2 hours for both columns:
```sql
ALTER TABLE customers
MODIFY COLUMN balance Int32 TTL timestamp + INTERVAL 2 HOUR,
MODIFY COLUMN address String TTL timestamp + INTERVAL 2 HOUR
```
Implementing a rollup {#implementing-a-rollup}
Suppose we want to delete rows after a certain amount of time but hang on to some of the data for reporting purposes. We don't want all the details - just a few aggregated results of historical data. This can be implemented by adding a `GROUP BY` clause to your `TTL` expression, along with some columns in your table to store the aggregated results.
Suppose in the following `hits` table we want to delete old rows, but hang on to the sum and maximum of the `hits` column before removing the rows. We will need fields to store those values in, and we will need to add a `GROUP BY` clause to the `TTL` clause that rolls up the sum and maximum:
```sql
CREATE TABLE hits (
    timestamp DateTime,
    id String,
    hits Int32,
    max_hits Int32 DEFAULT hits,
    sum_hits Int64 DEFAULT hits
)
ENGINE = MergeTree
PRIMARY KEY (id, toStartOfDay(timestamp), timestamp)
TTL timestamp + INTERVAL 1 DAY
    GROUP BY id, toStartOfDay(timestamp)
    SET
        max_hits = max(max_hits),
        sum_hits = sum(sum_hits);
```
Some notes on the `hits` table:

- The `GROUP BY` columns in the `TTL` clause must be a prefix of the `PRIMARY KEY`, and we want to group our results by the start of the day. Therefore, `toStartOfDay(timestamp)` was added to the primary key
- We added two fields to store the aggregated results: `max_hits` and `sum_hits` | {"source_file": "ttl.md"} | [
b138fd40-447d-4e2d-9a1e-40da89466378 | We added two fields to store the aggregated results: `max_hits` and `sum_hits`
- Setting the default value of `max_hits` and `sum_hits` to `hits` is necessary for our logic to work, based on how the `SET` clause is defined
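The merge-time rollup that the `TTL ... GROUP BY ... SET` clause performs can be sketched in plain Python to see what happens to expired rows (illustrative rows, not real data):

```python
from collections import defaultdict
from datetime import datetime

# (timestamp, id, hits) rows that have passed their TTL and are being rolled up.
expired_rows = [
    (datetime(2024, 1, 1, 9, 0), "page_a", 3),
    (datetime(2024, 1, 1, 17, 0), "page_a", 5),
    (datetime(2024, 1, 1, 11, 0), "page_b", 7),
]

# GROUP BY id, toStartOfDay(timestamp)
groups = defaultdict(list)
for ts, page_id, hits in expired_rows:
    day = ts.replace(hour=0, minute=0, second=0)
    groups[(page_id, day)].append(hits)

# SET max_hits = max(max_hits), sum_hits = sum(sum_hits)
rollup = {
    key: {"max_hits": max(h), "sum_hits": sum(h)}
    for key, h in groups.items()
}
for (page_id, day), agg in sorted(rollup.items()):
    print(page_id, day.date(), agg)
```

Each group of expired rows collapses to a single row holding the maximum and the sum, which is exactly what survives the merge.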
Implementing a hot/warm/cold architecture {#implementing-a-hotwarmcold-architecture}
:::note
If you are using ClickHouse Cloud, the steps in the lesson are not applicable. You do not need to worry about moving old data around in ClickHouse Cloud.
:::
A common practice when working with large amounts of data is to move that data around as it gets older. Here are the steps for implementing a hot/warm/cold architecture in ClickHouse using the `TO DISK` and `TO VOLUME` clauses of the `TTL` command. (By the way, it doesn't have to be a hot and cold thing - you can use TTL to move data around for whatever use case you have.)
The `TO DISK` and `TO VOLUME` options refer to the names of disks or volumes defined in your ClickHouse configuration files. Create a new file named `my_system.xml` (or any file name) that defines your disks, then define volumes that use your disks. Place the XML file in `/etc/clickhouse-server/config.d/` to have the configuration applied to your system:
```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <default>
            </default>
            <hot_disk>
                <path>./hot/</path>
            </hot_disk>
            <warm_disk>
                <path>./warm/</path>
            </warm_disk>
            <cold_disk>
                <path>./cold/</path>
            </cold_disk>
        </disks>
        <policies>
            <default>
                <volumes>
                    <default>
                        <disk>default</disk>
                    </default>
                    <hot_volume>
                        <disk>hot_disk</disk>
                    </hot_volume>
                    <warm_volume>
                        <disk>warm_disk</disk>
                    </warm_volume>
                    <cold_volume>
                        <disk>cold_disk</disk>
                    </cold_volume>
                </volumes>
            </default>
        </policies>
    </storage_configuration>
</clickhouse>
```
The configuration above refers to three disks that point to folders that ClickHouse can read from and write to. Volumes can contain one or more disks - we defined a volume for each of the three disks. Let's view the disks:
```sql
SELECT name, path, free_space, total_space
FROM system.disks
```
```response
┌─name──────┬─path─────────┬───free_space─┬──total_space─┐
│ cold_disk │ ./data/cold/ │ 179143311360 │ 494384795648 │
│ default   │ ./           │ 179143311360 │ 494384795648 │
│ hot_disk  │ ./data/hot/  │ 179143311360 │ 494384795648 │
│ warm_disk │ ./data/warm/ │ 179143311360 │ 494384795648 │
└───────────┴──────────────┴──────────────┴──────────────┘
```
And...let's verify the volumes: | {"source_file": "ttl.md"} | [
8b016616-9264-4183-a9ac-cb88364429a7 | And...let's verify the volumes:
```sql
SELECT
    volume_name,
    disks
FROM system.storage_policies
```
```response
┌─volume_name─┬─disks─────────┐
│ default     │ ['default']   │
│ hot_volume  │ ['hot_disk']  │
│ warm_volume │ ['warm_disk'] │
│ cold_volume │ ['cold_disk'] │
└─────────────┴───────────────┘
```
Now we will add a `TTL` rule that moves the data between the hot, warm and cold volumes:
```sql
ALTER TABLE my_table
    MODIFY TTL
        trade_date TO VOLUME 'hot_volume',
        trade_date + INTERVAL 2 YEAR TO VOLUME 'warm_volume',
        trade_date + INTERVAL 4 YEAR TO VOLUME 'cold_volume';
```
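Which volume a part belongs on is purely a function of row age; the rule above can be mimicked with a small helper (hypothetical Python, not something ClickHouse runs):

```python
from datetime import date

def target_volume(trade_date: date, today: date) -> str:
    """Mirror the TTL rule: under 2 years hot, under 4 years warm, else cold."""
    age_years = (today - trade_date).days / 365.25
    if age_years < 2:
        return "hot_volume"
    if age_years < 4:
        return "warm_volume"
    return "cold_volume"

today = date(2024, 6, 1)
print(target_volume(date(2023, 6, 1), today))  # hot_volume
print(target_volume(date(2021, 6, 1), today))  # warm_volume
print(target_volume(date(2019, 6, 1), today))  # cold_volume
```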
The new `TTL` rule should materialize, but you can force it to make sure:

```sql
ALTER TABLE my_table
MATERIALIZE TTL
```
Verify your data has moved to its expected disks using the `system.parts` table:
```sql
-- Using the system.parts table, view which disks the parts are on for the my_table table:
SELECT
    name,
    disk_name
FROM system.parts
WHERE (table = 'my_table') AND (active = 1)
```
The response will look like:
```response
┌─name────────┬─disk_name─┐
│ all_1_3_1_5 │ warm_disk │
│ all_2_2_0   │ hot_disk  │
└─────────────┴───────────┘
```
| {"source_file": "ttl.md"} | [
17cd62a4-68a1-492d-9d2d-d528c8e919ae | sidebar_label: 'Stored procedures & query parameters'
sidebar_position: 19
keywords: ['clickhouse', 'stored procedures', 'prepared statements', 'query parameters', 'UDF', 'parameterized views']
description: 'Guide on stored procedures, prepared statements, and query parameters in ClickHouse'
slug: /guides/developer/stored-procedures-and-prepared-statements
title: 'Stored Procedures and Query Parameters'
doc_type: 'guide'
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
Stored procedures and query parameters in ClickHouse
If you're coming from a traditional relational database, you may be looking for stored procedures and prepared statements in ClickHouse.
This guide explains ClickHouse's approach to these concepts and provides recommended alternatives.
Alternatives to stored procedures in ClickHouse {#alternatives-to-stored-procedures}
ClickHouse does not support traditional stored procedures with control flow logic (`IF`/`ELSE`, loops, etc.).
This is an intentional design decision based on ClickHouse's architecture as an analytical database.
Loops are discouraged for analytical databases because processing O(n) simple queries is usually slower than processing fewer complex queries.
ClickHouse is optimized for:

- **Analytical workloads** - complex aggregations over large datasets
- **Batch processing** - handling large data volumes efficiently
- **Declarative queries** - SQL queries that describe what data to retrieve, not how to process it
Stored procedures with procedural logic work against these optimizations. Instead, ClickHouse provides alternatives that align with its strengths.
User-Defined Functions (UDFs) {#user-defined-functions}
User-Defined Functions let you encapsulate reusable logic without control flow. ClickHouse supports two types:
Lambda-based UDFs {#lambda-based-udfs}
Create functions using SQL expressions and lambda syntax:
Sample data for examples
```sql
-- Create the products table
CREATE TABLE products (
product_id UInt32,
product_name String,
price Decimal(10, 2)
)
ENGINE = MergeTree()
ORDER BY product_id;
-- Insert sample data
INSERT INTO products (product_id, product_name, price) VALUES
(1, 'Laptop', 899.99),
(2, 'Wireless Mouse', 24.99),
(3, 'USB-C Cable', 12.50),
(4, 'Monitor', 299.00),
(5, 'Keyboard', 79.99),
(6, 'Webcam', 54.95),
(7, 'Desk Lamp', 34.99),
(8, 'External Hard Drive', 119.99),
(9, 'Headphones', 149.00),
(10, 'Phone Stand', 15.99);
```
```sql
-- Simple calculation function
CREATE FUNCTION calculate_tax AS (price, rate) -> price * rate;
SELECT
product_name,
price,
calculate_tax(price, 0.08) AS tax
FROM products;
```
```sql
-- Conditional logic using if()
CREATE FUNCTION price_tier AS (price) ->
if(price < 100, 'Budget',
if(price < 500, 'Mid-range', 'Premium'));
SELECT
product_name,
price,
price_tier(price) AS tier
FROM products;
``` | {"source_file": "stored-procedures-and-prepared-statements.md"} | [
6dfec880-33aa-4b90-b894-8f1604a24692 | SELECT
product_name,
price,
price_tier(price) AS tier
FROM products;
```
```sql
-- String manipulation
CREATE FUNCTION format_phone AS (phone) ->
concat('(', substring(phone, 1, 3), ') ',
substring(phone, 4, 3), '-',
substring(phone, 7, 4));
SELECT format_phone('5551234567');
-- Result: (555) 123-4567
```
Limitations:
- No loops or complex control flow
- Cannot modify data (`INSERT`/`UPDATE`/`DELETE`)
- Recursive functions not allowed

See CREATE FUNCTION for complete syntax.
Executable UDFs {#executable-udfs}
For more complex logic, use executable UDFs that call external programs:
```xml
<functions>
    <function>
        <type>executable</type>
        <name>sentiment_score</name>
        <return_type>Float32</return_type>
        <argument>
            <type>String</type>
        </argument>
        <format>TabSeparated</format>
        <command>python3 /opt/scripts/sentiment.py</command>
    </function>
</functions>
```
```sql
-- Use the executable UDF
SELECT
    review_text,
    sentiment_score(review_text) AS score
FROM customer_reviews;
```
Executable UDFs can implement arbitrary logic in any language (Python, Node.js, Go, etc.).
See Executable UDFs for details.
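Executable UDFs communicate over stdin/stdout in the configured format. A hypothetical `/opt/scripts/sentiment.py` matching the configuration above could look like this (a toy word-list scorer, purely illustrative - a real script would load an actual sentiment model):

```python
#!/usr/bin/env python3
"""Toy scorer: ClickHouse sends one String argument per line on stdin
(TabSeparated) and expects one Float32 score per line on stdout."""
import sys

POSITIVE = {"good", "great", "love", "excellent"}
NEGATIVE = {"bad", "poor", "hate", "terrible"}

def score(text: str) -> float:
    words = text.lower().split()
    if not words:
        return 0.0
    signed = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    return signed / len(words)

def main() -> None:
    for line in sys.stdin:
        sys.stdout.write(f"{score(line.rstrip())}\n")
        sys.stdout.flush()  # flush per row so ClickHouse is not kept waiting

if __name__ == "__main__":
    main()
```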
Parameterized views {#parameterized-views}
Parameterized views act like functions that return datasets.
They're ideal for reusable queries with dynamic filtering:
Sample data for example
```sql
-- Create the sales table
CREATE TABLE sales (
date Date,
product_id UInt32,
product_name String,
category String,
quantity UInt32,
revenue Decimal(10, 2),
sales_amount Decimal(10, 2)
)
ENGINE = MergeTree()
ORDER BY (date, product_id);
-- Insert sample data
INSERT INTO sales VALUES
('2024-01-05', 12345, 'Laptop Pro', 'Electronics', 2, 1799.98, 1799.98),
('2024-01-06', 12345, 'Laptop Pro', 'Electronics', 1, 899.99, 899.99),
('2024-01-10', 12346, 'Wireless Mouse', 'Electronics', 5, 124.95, 124.95),
('2024-01-15', 12347, 'USB-C Cable', 'Accessories', 10, 125.00, 125.00),
('2024-01-20', 12345, 'Laptop Pro', 'Electronics', 3, 2699.97, 2699.97),
('2024-01-25', 12348, 'Monitor 4K', 'Electronics', 2, 598.00, 598.00),
('2024-02-01', 12345, 'Laptop Pro', 'Electronics', 1, 899.99, 899.99),
('2024-02-05', 12349, 'Keyboard Mechanical', 'Accessories', 4, 319.96, 319.96),
('2024-02-10', 12346, 'Wireless Mouse', 'Electronics', 8, 199.92, 199.92),
('2024-02-15', 12350, 'Webcam HD', 'Electronics', 3, 164.85, 164.85);
```
```sql
-- Create a parameterized view
CREATE VIEW sales_by_date AS
SELECT
    date,
    product_id,
    sum(quantity) AS total_quantity,
    sum(revenue) AS total_revenue
FROM sales
WHERE date BETWEEN {start_date:Date} AND {end_date:Date}
GROUP BY date, product_id;
```
```sql
-- Query the view with parameters
SELECT *
FROM sales_by_date(start_date='2024-01-01', end_date='2024-01-31')
WHERE product_id = 12345;
```
Common use cases {#common-use-cases}
- Dynamic date range filtering
- User-specific data slicing
- Multi-tenant data access
- Report templates
- Data masking | {"source_file": "stored-procedures-and-prepared-statements.md"} | [
046ced20-60d1-4579-abf5-039b92f160fb | Common use cases {#common-use-cases}
- Dynamic date range filtering
- User-specific data slicing
- Multi-tenant data access
- Report templates
- Data masking
```sql
-- More complex parameterized view
CREATE VIEW top_products_by_category AS
SELECT
category,
product_name,
revenue,
rank
FROM (
SELECT
category,
product_name,
revenue,
rank() OVER (PARTITION BY category ORDER BY revenue DESC) AS rank
FROM (
SELECT
category,
product_name,
sum(sales_amount) AS revenue
FROM sales
WHERE category = {category:String}
AND date >= {min_date:Date}
GROUP BY category, product_name
)
)
WHERE rank <= {top_n:UInt32};
-- Use it
SELECT * FROM top_products_by_category(
category='Electronics',
min_date='2024-01-01',
top_n=10
);
```
See the Parameterized Views section for more information.
Materialized views {#materialized-views}
Materialized views are ideal for pre-computing expensive aggregations that would traditionally be done in stored procedures. If you're coming from a traditional database, think of a materialized view as an **INSERT trigger** that automatically transforms and aggregates data as it's inserted into the source table:
```sql
-- Source table
CREATE TABLE page_views (
user_id UInt64,
page String,
timestamp DateTime,
session_id String
)
ENGINE = MergeTree()
ORDER BY (user_id, timestamp);
-- Materialized view that maintains aggregated statistics
CREATE MATERIALIZED VIEW daily_user_stats
ENGINE = SummingMergeTree()
ORDER BY (date, user_id)
AS SELECT
toDate(timestamp) AS date,
user_id,
count() AS page_views,
uniq(session_id) AS sessions,
uniq(page) AS unique_pages
FROM page_views
GROUP BY date, user_id;
-- Insert sample data into source table
INSERT INTO page_views VALUES
(101, '/home', '2024-01-15 10:00:00', 'session_a1'),
(101, '/products', '2024-01-15 10:05:00', 'session_a1'),
(101, '/checkout', '2024-01-15 10:10:00', 'session_a1'),
(102, '/home', '2024-01-15 11:00:00', 'session_b1'),
(102, '/about', '2024-01-15 11:05:00', 'session_b1'),
(101, '/home', '2024-01-16 09:00:00', 'session_a2'),
(101, '/products', '2024-01-16 09:15:00', 'session_a2'),
(103, '/home', '2024-01-16 14:00:00', 'session_c1'),
(103, '/products', '2024-01-16 14:05:00', 'session_c1'),
(103, '/products', '2024-01-16 14:10:00', 'session_c1'),
(102, '/home', '2024-01-17 10:30:00', 'session_b2'),
(102, '/contact', '2024-01-17 10:35:00', 'session_b2');
-- Query pre-aggregated data
SELECT
user_id,
sum(page_views) AS total_views,
sum(sessions) AS total_sessions
FROM daily_user_stats
WHERE date BETWEEN '2024-01-01' AND '2024-01-31'
GROUP BY user_id;
```
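The numbers the materialized view maintains can be double-checked by aggregating the sample rows directly in Python (a sanity check on the example data, not part of the setup):

```python
from collections import defaultdict

# The same sample rows inserted into page_views above.
page_views = [
    (101, "/home", "2024-01-15 10:00:00", "session_a1"),
    (101, "/products", "2024-01-15 10:05:00", "session_a1"),
    (101, "/checkout", "2024-01-15 10:10:00", "session_a1"),
    (102, "/home", "2024-01-15 11:00:00", "session_b1"),
    (102, "/about", "2024-01-15 11:05:00", "session_b1"),
    (101, "/home", "2024-01-16 09:00:00", "session_a2"),
    (101, "/products", "2024-01-16 09:15:00", "session_a2"),
    (103, "/home", "2024-01-16 14:00:00", "session_c1"),
    (103, "/products", "2024-01-16 14:05:00", "session_c1"),
    (103, "/products", "2024-01-16 14:10:00", "session_c1"),
    (102, "/home", "2024-01-17 10:30:00", "session_b2"),
    (102, "/contact", "2024-01-17 10:35:00", "session_b2"),
]

# Aggregate per user, like the final SELECT over daily_user_stats.
# (Summing per-day uniq(session_id) matches here because no session spans days.)
views = defaultdict(int)
sessions = defaultdict(set)
for user_id, page, ts, session_id in page_views:
    views[user_id] += 1
    sessions[user_id].add(session_id)

for user_id in sorted(views):
    print(user_id, views[user_id], len(sessions[user_id]))
# 101 5 2
# 102 4 2
# 103 3 1
```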
Refreshable materialized views {#refreshable-materialized-views}
For scheduled batch processing (like nightly stored procedures): | {"source_file": "stored-procedures-and-prepared-statements.md"} | [
5eeab68c-6794-4820-84b9-efa71d8d12b6 | Refreshable materialized views {#refreshable-materialized-views}
For scheduled batch processing (like nightly stored procedures):
```sql
-- Automatically refresh every day at 2 AM
CREATE MATERIALIZED VIEW monthly_sales_report
REFRESH EVERY 1 DAY OFFSET 2 HOUR
AS SELECT
toStartOfMonth(order_date) AS month,
region,
product_category,
count() AS order_count,
sum(amount) AS total_revenue,
avg(amount) AS avg_order_value
FROM orders
WHERE order_date >= today() - INTERVAL 13 MONTH
GROUP BY month, region, product_category;
-- Query always has fresh data
SELECT * FROM monthly_sales_report
WHERE month = toStartOfMonth(today());
```
See Cascading Materialized Views for advanced patterns.
External orchestration {#external-orchestration}
For complex business logic, ETL workflows, or multi-step processes, it's always possible to implement logic outside ClickHouse,
using language clients.
Using application code {#using-application-code}
Here's a side-by-side comparison showing how a MySQL stored procedure translates to application code with ClickHouse:
```sql
DELIMITER $$
CREATE PROCEDURE process_order(
IN p_order_id INT,
IN p_customer_id INT,
IN p_order_total DECIMAL(10,2),
OUT p_status VARCHAR(50),
OUT p_loyalty_points INT
)
BEGIN
DECLARE v_customer_tier VARCHAR(20);
DECLARE v_previous_orders INT;
DECLARE v_discount DECIMAL(10,2);
-- Start transaction
START TRANSACTION;
-- Get customer information
SELECT tier, total_orders
INTO v_customer_tier, v_previous_orders
FROM customers
WHERE customer_id = p_customer_id;
-- Calculate discount based on tier
IF v_customer_tier = 'gold' THEN
SET v_discount = p_order_total * 0.15;
ELSEIF v_customer_tier = 'silver' THEN
SET v_discount = p_order_total * 0.10;
ELSE
SET v_discount = 0;
END IF;
-- Insert order record
INSERT INTO orders (order_id, customer_id, order_total, discount, final_amount)
VALUES (p_order_id, p_customer_id, p_order_total, v_discount,
p_order_total - v_discount);
-- Update customer statistics
UPDATE customers
SET total_orders = total_orders + 1,
lifetime_value = lifetime_value + (p_order_total - v_discount),
last_order_date = NOW()
WHERE customer_id = p_customer_id;
-- Calculate loyalty points (1 point per dollar)
SET p_loyalty_points = FLOOR(p_order_total - v_discount);
-- Insert loyalty points transaction
INSERT INTO loyalty_points (customer_id, points, transaction_date, description)
VALUES (p_customer_id, p_loyalty_points, NOW(),
CONCAT('Order #', p_order_id)); | {"source_file": "stored-procedures-and-prepared-statements.md"} | [
96111e3a-fd3b-442c-a702-ef495d1f576b | -- Check if customer should be upgraded
IF v_previous_orders + 1 >= 10 AND v_customer_tier = 'bronze' THEN
UPDATE customers SET tier = 'silver' WHERE customer_id = p_customer_id;
SET p_status = 'ORDER_COMPLETE_TIER_UPGRADED_SILVER';
ELSEIF v_previous_orders + 1 >= 50 AND v_customer_tier = 'silver' THEN
UPDATE customers SET tier = 'gold' WHERE customer_id = p_customer_id;
SET p_status = 'ORDER_COMPLETE_TIER_UPGRADED_GOLD';
ELSE
SET p_status = 'ORDER_COMPLETE';
END IF;
COMMIT;
END$$
DELIMITER ;
-- Call the stored procedure
CALL process_order(12345, 5678, 250.00, @status, @points);
SELECT @status, @points;
```
:::note Query parameters
The example below uses query parameters in ClickHouse. Skip ahead to "Alternatives to prepared statements in ClickHouse" if you are not yet familiar with query parameters in ClickHouse.
:::
```python
# Python example using clickhouse-connect
import clickhouse_connect
from datetime import datetime
from decimal import Decimal
client = clickhouse_connect.get_client(host='localhost')
def process_order(order_id: int, customer_id: int, order_total: Decimal) -> tuple[str, int]:
"""
Processes an order with business logic that would be in a stored procedure.
Returns: (status_message, loyalty_points)
Note: ClickHouse is optimized for analytics, not OLTP transactions.
For transactional workloads, use an OLTP database (PostgreSQL, MySQL)
and sync analytics data to ClickHouse for reporting.
"""
# Step 1: Get customer information
result = client.query(
"""
SELECT tier, total_orders
FROM customers
WHERE customer_id = {cid: UInt32}
""",
parameters={'cid': customer_id}
)
if not result.result_rows:
raise ValueError(f"Customer {customer_id} not found")
customer_tier, previous_orders = result.result_rows[0]
# Step 2: Calculate discount based on tier (business logic in Python)
discount_rates = {'gold': 0.15, 'silver': 0.10, 'bronze': 0.0}
discount = order_total * Decimal(str(discount_rates.get(customer_tier, 0.0)))
final_amount = order_total - discount
# Step 3: Insert order record
client.command(
"""
INSERT INTO orders (order_id, customer_id, order_total, discount,
final_amount, order_date)
VALUES ({oid: UInt32}, {cid: UInt32}, {total: Decimal64(2)},
{disc: Decimal64(2)}, {final: Decimal64(2)}, now())
""",
parameters={
'oid': order_id,
'cid': customer_id,
'total': float(order_total),
'disc': float(discount),
'final': float(final_amount)
}
)
# Step 4: Calculate new customer statistics
new_order_count = previous_orders + 1 | {"source_file": "stored-procedures-and-prepared-statements.md"} | [
779d4fa8-15f3-44de-896b-b188c91de856 | # Step 4: Calculate new customer statistics
new_order_count = previous_orders + 1
# For analytics databases, prefer INSERT over UPDATE
# This uses a ReplacingMergeTree pattern
client.command(
"""
INSERT INTO customers (customer_id, tier, total_orders, last_order_date,
update_time)
SELECT
customer_id,
tier,
{new_count: UInt32} AS total_orders,
now() AS last_order_date,
now() AS update_time
FROM customers
WHERE customer_id = {cid: UInt32}
""",
parameters={'cid': customer_id, 'new_count': new_order_count}
)
# Step 5: Calculate and record loyalty points
loyalty_points = int(final_amount)
client.command(
"""
INSERT INTO loyalty_points (customer_id, points, transaction_date, description)
VALUES ({cid: UInt32}, {pts: Int32}, now(),
{desc: String})
""",
parameters={
'cid': customer_id,
'pts': loyalty_points,
'desc': f'Order #{order_id}'
}
)
# Step 6: Check for tier upgrade (business logic in Python)
status = 'ORDER_COMPLETE'
if new_order_count >= 10 and customer_tier == 'bronze':
# Upgrade to silver
client.command(
"""
INSERT INTO customers (customer_id, tier, total_orders, last_order_date,
update_time)
SELECT
customer_id, 'silver' AS tier, total_orders, last_order_date,
now() AS update_time
FROM customers
WHERE customer_id = {cid: UInt32}
""",
parameters={'cid': customer_id}
)
status = 'ORDER_COMPLETE_TIER_UPGRADED_SILVER'
elif new_order_count >= 50 and customer_tier == 'silver':
# Upgrade to gold
client.command(
"""
INSERT INTO customers (customer_id, tier, total_orders, last_order_date,
update_time)
SELECT
customer_id, 'gold' AS tier, total_orders, last_order_date,
now() AS update_time
FROM customers
WHERE customer_id = {cid: UInt32}
""",
parameters={'cid': customer_id}
)
status = 'ORDER_COMPLETE_TIER_UPGRADED_GOLD'
return status, loyalty_points
Use the function
status, points = process_order(
order_id=12345,
customer_id=5678,
order_total=Decimal('250.00')
)
print(f"Status: {status}, Loyalty Points: {points}")
```
Key differences {#key-differences}
**Control flow** - MySQL stored procedures use `IF/ELSE` and `WHILE` loops. In ClickHouse, implement this logic in your application code (Python, Java, etc.)

**Transactions** - MySQL supports `BEGIN/COMMIT/ROLLBACK` for ACID transactions. ClickHouse is an analytical database optimized for append-only workloads, not transactional updates

**Updates** - MySQL uses `UPDATE` statements. ClickHouse prefers `INSERT` with `ReplacingMergeTree` or `CollapsingMergeTree` for mutable data
**Variables and state** - MySQL stored procedures can declare variables (`DECLARE v_discount`). With ClickHouse, manage state in your application code

**Error handling** - MySQL supports `SIGNAL` and exception handlers. In application code, use your language's native error handling (try/catch)
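A pattern that replaces a stored procedure's exception handler is a small retry wrapper in the application. Below is a minimal sketch; the `flaky_insert` stub stands in for a real client call, and all names are illustrative rather than part of any ClickHouse API:

```python
import time

def run_with_retries(command, max_retries=3, base_delay=0.05):
    """Retry a callable with exponential backoff, the way a stored
    procedure's error handler might re-attempt a failed step."""
    for attempt in range(max_retries):
        try:
            return command()
        except ConnectionError:
            if attempt == max_retries - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * (2 ** attempt))

# Stub that fails twice before succeeding, simulating a transient error
attempts = {"n": 0}
def flaky_insert():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("simulated network error")
    return "OK"

print(run_with_retries(flaky_insert))  # OK
```

In a real application, `flaky_insert` would be a closure over a `client.command(...)` call, and you would typically catch the driver's own exception types rather than `ConnectionError`.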
:::tip
When to use each approach:
- **OLTP workloads** (orders, payments, user accounts) β Use MySQL/PostgreSQL with stored procedures
- **Analytics workloads** (reporting, aggregations, time-series) β Use ClickHouse with application orchestration
- **Hybrid architecture** β Use both! Stream transactional data from OLTP to ClickHouse for analytics
:::
Using workflow orchestration tools {#using-workflow-orchestration-tools}
- **Apache Airflow** - Schedule and monitor complex DAGs of ClickHouse queries
- **dbt** - Transform data with SQL-based workflows
- **Prefect/Dagster** - Modern Python-based orchestration
- **Custom schedulers** - Cron jobs, Kubernetes CronJobs, etc.
Benefits of external orchestration:
- Full programming language capabilities
- Better error handling and retry logic
- Integration with external systems (APIs, other databases)
- Version control and testing
- Monitoring and alerting
- More flexible scheduling
Alternatives to prepared statements in ClickHouse {#alternatives-to-prepared-statements-in-clickhouse}
While ClickHouse doesn't have traditional "prepared statements" in the RDBMS sense, it provides **query parameters** that serve the same purpose: safe, parameterized queries that prevent SQL injection.
Syntax {#query-parameters-syntax}
There are two ways to define query parameters:
Method 1: using `SET` {#method-1-using-set}
Example table and data
```sql
-- Create the user_events table (ClickHouse syntax)
CREATE TABLE user_events (
event_id UInt32,
user_id UInt64,
event_name String,
event_date Date,
event_timestamp DateTime
) ENGINE = MergeTree()
ORDER BY (user_id, event_date);
-- Insert sample data for multiple users and events
INSERT INTO user_events (event_id, user_id, event_name, event_date, event_timestamp) VALUES
(1, 12345, 'page_view', '2024-01-05', '2024-01-05 10:30:00'),
(2, 12345, 'page_view', '2024-01-05', '2024-01-05 10:35:00'),
(3, 12345, 'add_to_cart', '2024-01-05', '2024-01-05 10:40:00'),
(4, 12345, 'page_view', '2024-01-10', '2024-01-10 14:20:00'),
(5, 12345, 'add_to_cart', '2024-01-10', '2024-01-10 14:25:00'),
(6, 12345, 'purchase', '2024-01-10', '2024-01-10 14:30:00'),
(7, 12345, 'page_view', '2024-01-15', '2024-01-15 09:15:00'),
(8, 12345, 'page_view', '2024-01-15', '2024-01-15 09:20:00'),
(9, 12345, 'page_view', '2024-01-20', '2024-01-20 16:45:00'),
(10, 12345, 'add_to_cart', '2024-01-20', '2024-01-20 16:50:00'),
(11, 12345, 'purchase', '2024-01-25', '2024-01-25 11:10:00'),
(12, 12345, 'page_view', '2024-01-28', '2024-01-28 13:30:00'),
(13, 67890, 'page_view', '2024-01-05', '2024-01-05 11:00:00'),
(14, 67890, 'add_to_cart', '2024-01-05', '2024-01-05 11:05:00'),
(15, 67890, 'purchase', '2024-01-05', '2024-01-05 11:10:00'),
(16, 12345, 'page_view', '2024-02-01', '2024-02-01 10:00:00'),
(17, 12345, 'add_to_cart', '2024-02-01', '2024-02-01 10:05:00');
```
```sql
SET param_user_id = 12345;
SET param_start_date = '2024-01-01';
SET param_end_date = '2024-01-31';
SELECT
event_name,
count() AS event_count
FROM user_events
WHERE user_id = {user_id: UInt64}
AND event_date BETWEEN {start_date: Date} AND {end_date: Date}
GROUP BY event_name;
```
Method 2: using CLI parameters {#method-2-using-cli-parameters}
bash
clickhouse-client \
--param_user_id=12345 \
--param_start_date='2024-01-01' \
--param_end_date='2024-01-31' \
--query="SELECT count() FROM user_events
WHERE user_id = {user_id: UInt64}
AND event_date BETWEEN {start_date: Date} AND {end_date: Date}"
Parameter syntax {#parameter-syntax}
Parameters are referenced using:
`{parameter_name: DataType}`
- `parameter_name` - The name of the parameter (without the `param_` prefix)
- `DataType` - The ClickHouse data type to cast the parameter to
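The placeholder shape can be checked mechanically. The regex below is only an illustration of the `{name: DataType}` token format for client-side inspection; the ClickHouse server does the real parsing and type casting:

```python
import re

# Placeholder pattern: {parameter_name: DataType}, e.g. {user_id: UInt64}
PLACEHOLDER = re.compile(r"\{(\w+)\s*:\s*([A-Za-z0-9()_, ]+)\}")

def extract_parameters(query: str):
    """Return (name, type) pairs for each placeholder found in a query."""
    return PLACEHOLDER.findall(query)

query = """
SELECT event_name, count() FROM user_events
WHERE user_id = {user_id: UInt64}
  AND event_date BETWEEN {start_date: Date} AND {end_date: Date}
"""
print(extract_parameters(query))
# [('user_id', 'UInt64'), ('start_date', 'Date'), ('end_date', 'Date')]
```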
Data type examples {#data-type-examples}
Tables and sample data for example
```sql
-- 1. Create a table for string and number tests
CREATE TABLE IF NOT EXISTS users (
name String,
age UInt8,
salary Float64
) ENGINE = Memory;
INSERT INTO users VALUES
('John Doe', 25, 75000.50),
('Jane Smith', 30, 85000.75),
('Peter Jones', 20, 50000.00);
-- 2. Create a table for date and timestamp tests
CREATE TABLE IF NOT EXISTS events (
event_date Date,
event_timestamp DateTime
) ENGINE = Memory;
INSERT INTO events VALUES
('2024-01-15', '2024-01-15 14:30:00'),
('2024-01-15', '2024-01-15 15:00:00'),
('2024-01-16', '2024-01-16 10:00:00');
-- 3. Create a table for array tests
CREATE TABLE IF NOT EXISTS products (
id UInt32,
name String
) ENGINE = Memory;
INSERT INTO products VALUES (1, 'Laptop'), (2, 'Monitor'), (3, 'Mouse'), (4, 'Keyboard');
-- 4. Create a table for Map (struct-like) tests
CREATE TABLE IF NOT EXISTS accounts (
user_id UInt32,
status String,
type String
) ENGINE = Memory;
INSERT INTO accounts VALUES
(101, 'active', 'premium'),
(102, 'inactive', 'basic'),
(103, 'active', 'basic');
-- 5. Create a table for Identifier tests
CREATE TABLE IF NOT EXISTS sales_2024 (
value UInt32
) ENGINE = Memory;
INSERT INTO sales_2024 VALUES (100), (200), (300);
```
```sql
SET param_name = 'John Doe';
SET param_age = 25;
SET param_salary = 75000.50;
SELECT name, age, salary FROM users
WHERE name = {name: String}
AND age >= {age: UInt8}
AND salary <= {salary: Float64};
```
```sql
SET param_date = '2024-01-15';
SET param_timestamp = '2024-01-15 14:30:00';
SELECT * FROM events
WHERE event_date = {date: Date}
OR event_timestamp > {timestamp: DateTime};
```
```sql
SET param_ids = [1, 2, 3, 4, 5];
SELECT * FROM products WHERE id IN {ids: Array(UInt32)};
```
```sql
SET param_filters = {'target_status': 'active'};
SELECT user_id, status, type FROM accounts
WHERE status = arrayElement(
mapValues({filters: Map(String, String)}),
indexOf(mapKeys({filters: Map(String, String)}), 'target_status')
);
```
```sql
SET param_table = 'sales_2024';
SELECT count() FROM {table: Identifier};
```
For use of query parameters in language clients, refer to the documentation for the specific language client you are interested in.
Limitations of query parameters {#limitations-of-query-parameters}
Query parameters are **not general text substitutions**. They have specific limitations:
- They are **primarily intended for SELECT statements** - the best support is in SELECT queries
- They **work as identifiers or literals** - they cannot substitute arbitrary SQL fragments
- They have **limited DDL support** - they are supported in `CREATE TABLE`, but not in `ALTER TABLE`
What WORKS:
```sql
-- β Values in WHERE clause
SELECT * FROM users WHERE id = {user_id: UInt64};
-- β Table/database names
SELECT * FROM {db: Identifier}.{table: Identifier};
-- β Values in IN clause
SELECT * FROM products WHERE id IN {ids: Array(UInt32)};
-- β CREATE TABLE
CREATE TABLE {table_name: Identifier} (id UInt64, name String) ENGINE = MergeTree() ORDER BY id;
```
What DOESN'T work:
```sql
-- β Column names in SELECT (use Identifier carefully)
SELECT {column: Identifier} FROM users; -- Limited support
-- β Arbitrary SQL fragments
SELECT * FROM users {where_clause: String}; -- NOT SUPPORTED
-- β ALTER TABLE statements
ALTER TABLE {table: Identifier} ADD COLUMN new_col String; -- NOT SUPPORTED
-- β Multiple statements
{statements: String}; -- NOT SUPPORTED
```
Security best practices {#security-best-practices} | {"source_file": "stored-procedures-and-prepared-statements.md"} | [
Always use query parameters for user input:
```python
# β SAFE - Uses parameters
user_input = request.get('user_id')
result = client.query(
    "SELECT * FROM orders WHERE user_id = {uid: UInt64}",
    parameters={'uid': user_input}
)

# β DANGEROUS - SQL injection risk!
user_input = request.get('user_id')
result = client.query(f"SELECT * FROM orders WHERE user_id = {user_input}")
```
Validate input types:
```python
def get_user_orders(user_id: int, start_date: str):
    # Validate types before querying
    if not isinstance(user_id, int) or user_id <= 0:
        raise ValueError("Invalid user_id")
    # Parameters enforce type safety
    return client.query(
        """
        SELECT * FROM orders
        WHERE user_id = {uid: UInt64}
        AND order_date >= {start: Date}
        """,
        parameters={'uid': user_id, 'start': start_date}
    )
```
MySQL protocol prepared statements {#mysql-protocol-prepared-statements}
ClickHouse's MySQL interface includes minimal support for prepared statements (`COM_STMT_PREPARE`, `COM_STMT_EXECUTE`, `COM_STMT_CLOSE`), primarily to enable connectivity with tools like Tableau Online that wrap queries in prepared statements.
Key limitations:
- **Parameter binding is not supported** - You cannot use `?` placeholders with bound parameters
- Queries are stored but not parsed during `PREPARE`
- Implementation is minimal and designed for specific BI tool compatibility
Example of what does NOT work:
sql
-- This MySQL-style prepared statement with parameters does NOT work in ClickHouse
PREPARE stmt FROM 'SELECT * FROM users WHERE id = ?';
EXECUTE stmt USING @user_id; -- Parameter binding not supported
:::tip
Use ClickHouse's native query parameters instead.
They provide full parameter binding support, type safety, and SQL injection prevention across all ClickHouse interfaces:
sql
-- ClickHouse native query parameters (recommended)
SET param_user_id = 12345;
SELECT * FROM users WHERE id = {user_id: UInt64};
:::
For more details, see the MySQL Interface documentation and the blog post on MySQL support.
Summary {#summary}
ClickHouse alternatives to stored procedures {#summary-stored-procedures} | {"source_file": "stored-procedures-and-prepared-statements.md"} | [
| Traditional Stored Procedure Pattern | ClickHouse Alternative |
|--------------------------------------|-----------------------------------------------------------------------------|
| Simple calculations and transformations | User-Defined Functions (UDFs) |
| Reusable parameterized queries | Parameterized Views |
| Pre-computed aggregations | Materialized Views |
| Scheduled batch processing | Refreshable Materialized Views |
| Complex multi-step ETL | Chained materialized views or external orchestration (Python, Airflow, dbt) |
| Business logic with control flow | Application code |
Use of query parameters {#summary-query-parameters}
Query parameters can be used for:
- Preventing SQL injection
- Parameterized queries with type safety
- Dynamic filtering in applications
- Reusable query templates
Related documentation {#related-documentation}
- `CREATE FUNCTION` - User-Defined Functions
- `CREATE VIEW` - Views including parameterized and materialized
- SQL Syntax - Query Parameters - Complete parameter syntax
- Cascading Materialized Views - Advanced materialized view patterns
- Executable UDFs - External function execution
-0.029318129643797874,
-0.0220935121178627,
-0.08525171875953674,
0.030378852039575577,
-0.11095110327005386,
-0.039467327296733856,
-0.025260167196393013,
0.08195998519659042,
0.010165276005864143,
0.09411213546991348,
0.005864151753485203,
-0.0405522845685482,
0.13771472871303558,
-0.056... |
slug: /guides/developer/time-series-filling-gaps
sidebar_label: 'Time Series - Gap Fill'
sidebar_position: 10
description: 'Filling gaps in time-series data.'
keywords: ['time series', 'gap fill']
title: 'Filling gaps in time-series data'
doc_type: 'guide'
Filling gaps in time-series data
When working with time-series data, there can be gaps in the data due to missing data or inactivity.
Typically, we don't want those gaps to exist when we query the data. In this case, the `WITH FILL` clause can come in handy.
This guide discusses how to use `WITH FILL` to fill gaps in your time-series data.
Setup {#setup}
Imagine we've got the following table that stores metadata on images generated by a GenAI image service:
sql
CREATE TABLE images
(
`id` String,
`timestamp` DateTime64(3),
`height` Int64,
`width` Int64,
`size` Int64
)
ENGINE = MergeTree
ORDER BY (size, height, width);
Let's import some records:
sql
INSERT INTO images VALUES (1088619203512250448, '2023-03-24 00:24:03.684', 1536, 1536, 2207289);
INSERT INTO images VALUES (1088619204040736859, '2023-03-24 00:24:03.810', 1024, 1024, 1928974);
INSERT INTO images VALUES (1088619204749561989, '2023-03-24 00:24:03.979', 1024, 1024, 1275619);
INSERT INTO images VALUES (1088619206431477862, '2023-03-24 00:24:04.380', 2048, 2048, 5985703);
INSERT INTO images VALUES (1088619206905434213, '2023-03-24 00:24:04.493', 1024, 1024, 1558455);
INSERT INTO images VALUES (1088619208524431510, '2023-03-24 00:24:04.879', 1024, 1024, 1494869);
INSERT INTO images VALUES (1088619208425437515, '2023-03-24 00:24:05.160', 1024, 1024, 1538451);
Querying by bucket {#querying-by-bucket}
We're going to explore the images created between `00:24:03` and `00:24:04` on the 24th March 2023, so let's create some parameters for those points in time:
sql
SET param_start = '2023-03-24 00:24:03',
param_end = '2023-03-24 00:24:04';
Next, we'll write a query that groups the data into 100ms buckets and returns the count of images created in that bucket:
sql
SELECT
toStartOfInterval(timestamp, toIntervalMillisecond(100)) AS bucket,
count() AS count
FROM MidJourney.images
WHERE (timestamp >= {start:String}) AND (timestamp <= {end:String})
GROUP BY ALL
ORDER BY bucket ASC
response
βββββββββββββββββββbucketββ¬βcountββ
β 2023-03-24 00:24:03.600 β 1 β
β 2023-03-24 00:24:03.800 β 1 β
β 2023-03-24 00:24:03.900 β 1 β
β 2023-03-24 00:24:04.300 β 1 β
β 2023-03-24 00:24:04.400 β 1 β
β 2023-03-24 00:24:04.800 β 1 β
βββββββββββββββββββββββββββ΄ββββββββ
The result set only includes the buckets where an image was created, but for time-series analysis, we might want to return each 100ms bucket, even if it doesn't have any entries.
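To build intuition for what the query does, here is a rough Python equivalent of truncating timestamps into 100ms buckets. This is illustrative only, and the shortcut used is only valid for step sizes that evenly divide one second:

```python
from collections import Counter
from datetime import datetime

def to_start_of_interval(ts: datetime, step_ms: int = 100) -> datetime:
    """Truncate ts down to the start of its step_ms bucket, like
    toStartOfInterval(timestamp, toIntervalMillisecond(100)).
    Only correct for steps that evenly divide one second."""
    step_us = step_ms * 1000
    return ts.replace(microsecond=(ts.microsecond // step_us) * step_us)

timestamps = [
    datetime(2023, 3, 24, 0, 24, 3, 684000),
    datetime(2023, 3, 24, 0, 24, 3, 810000),
    datetime(2023, 3, 24, 0, 24, 3, 979000),
]
# Count how many timestamps fall into each 100ms bucket
for bucket, count in sorted(Counter(map(to_start_of_interval, timestamps)).items()):
    print(bucket, count)
```

Note that, just like the SQL version, this only produces buckets that contain at least one row; the gap buckets are what `WITH FILL` adds.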
WITH FILL {#with-fill} | {"source_file": "time-series-filling-gaps.md"} | [
We can use the `WITH FILL` clause to fill in these gaps.
We'll also specify the `STEP`, which is the size of the gaps to fill.
This defaults to 1 second for `DateTime` types, but we'd like to fill gaps of 100ms in length, so let's use an interval of 100ms as our step value:
sql
SELECT
toStartOfInterval(timestamp, toIntervalMillisecond(100)) AS bucket,
count() AS count
FROM MidJourney.images
WHERE (timestamp >= {start:String}) AND (timestamp <= {end:String})
GROUP BY ALL
ORDER BY bucket ASC
WITH FILL
STEP toIntervalMillisecond(100);
response
βββββββββββββββββββbucketββ¬βcountββ
β 2023-03-24 00:24:03.600 β 1 β
β 2023-03-24 00:24:03.700 β 0 β
β 2023-03-24 00:24:03.800 β 1 β
β 2023-03-24 00:24:03.900 β 1 β
β 2023-03-24 00:24:04.000 β 0 β
β 2023-03-24 00:24:04.100 β 0 β
β 2023-03-24 00:24:04.200 β 0 β
β 2023-03-24 00:24:04.300 β 1 β
β 2023-03-24 00:24:04.400 β 1 β
β 2023-03-24 00:24:04.500 β 0 β
β 2023-03-24 00:24:04.600 β 0 β
β 2023-03-24 00:24:04.700 β 0 β
β 2023-03-24 00:24:04.800 β 1 β
βββββββββββββββββββββββββββ΄ββββββββ
We can see that the gaps have been filled with 0 values in the `count` column.
WITH FILL...FROM {#with-fillfrom}
There is, however, still a gap at the beginning of the time range, which we can fix by specifying `FROM`:
sql
SELECT
toStartOfInterval(timestamp, toIntervalMillisecond(100)) AS bucket,
count() AS count
FROM MidJourney.images
WHERE (timestamp >= {start:String}) AND (timestamp <= {end:String})
GROUP BY ALL
ORDER BY bucket ASC
WITH FILL
FROM toDateTime64({start:String}, 3)
STEP toIntervalMillisecond(100);
response
βββββββββββββββββββbucketββ¬βcountββ
β 2023-03-24 00:24:03.000 β 0 β
β 2023-03-24 00:24:03.100 β 0 β
β 2023-03-24 00:24:03.200 β 0 β
β 2023-03-24 00:24:03.300 β 0 β
β 2023-03-24 00:24:03.400 β 0 β
β 2023-03-24 00:24:03.500 β 0 β
β 2023-03-24 00:24:03.600 β 1 β
β 2023-03-24 00:24:03.700 β 0 β
β 2023-03-24 00:24:03.800 β 1 β
β 2023-03-24 00:24:03.900 β 1 β
β 2023-03-24 00:24:04.000 β 0 β
β 2023-03-24 00:24:04.100 β 0 β
β 2023-03-24 00:24:04.200 β 0 β
β 2023-03-24 00:24:04.300 β 1 β
β 2023-03-24 00:24:04.400 β 1 β
β 2023-03-24 00:24:04.500 β 0 β
β 2023-03-24 00:24:04.600 β 0 β
β 2023-03-24 00:24:04.700 β 0 β
β 2023-03-24 00:24:04.800 β 1 β
βββββββββββββββββββββββββββ΄ββββββββ
We can see from the results that the buckets from `00:24:03.000` to `00:24:03.500` all now appear.
WITH FILL...TO {#with-fillto}
We're still missing some buckets from the end of the time range though, which we can fill by providing a `TO` value.
`TO` is not inclusive, so we'll add a small amount to the end time to make sure that it's included:
sql
SELECT
toStartOfInterval(timestamp, toIntervalMillisecond(100)) AS bucket,
count() AS count
FROM MidJourney.images
WHERE (timestamp >= {start:String}) AND (timestamp <= {end:String})
GROUP BY ALL
ORDER BY bucket ASC
WITH FILL
FROM toDateTime64({start:String}, 3)
TO toDateTime64({end:String}, 3) + INTERVAL 1 millisecond
STEP toIntervalMillisecond(100);
response
βββββββββββββββββββbucketββ¬βcountββ
β 2023-03-24 00:24:03.000 β 0 β
β 2023-03-24 00:24:03.100 β 0 β
β 2023-03-24 00:24:03.200 β 0 β
β 2023-03-24 00:24:03.300 β 0 β
β 2023-03-24 00:24:03.400 β 0 β
β 2023-03-24 00:24:03.500 β 0 β
β 2023-03-24 00:24:03.600 β 1 β
β 2023-03-24 00:24:03.700 β 0 β
β 2023-03-24 00:24:03.800 β 1 β
β 2023-03-24 00:24:03.900 β 1 β
β 2023-03-24 00:24:04.000 β 0 β
β 2023-03-24 00:24:04.100 β 0 β
β 2023-03-24 00:24:04.200 β 0 β
β 2023-03-24 00:24:04.300 β 1 β
β 2023-03-24 00:24:04.400 β 1 β
β 2023-03-24 00:24:04.500 β 0 β
β 2023-03-24 00:24:04.600 β 0 β
β 2023-03-24 00:24:04.700 β 0 β
β 2023-03-24 00:24:04.800 β 1 β
β 2023-03-24 00:24:04.900 β 0 β
β 2023-03-24 00:24:05.000 β 0 β
βββββββββββββββββββββββββββ΄ββββββββ
The gaps have all now been filled and we have entries for every 100 ms from `00:24:03.000` to `00:24:05.000`.
Cumulative count {#cumulative-count}
Let's say we now want to keep a cumulative count of the number of images created across the buckets.
We can do this by adding a `cumulative` column, as shown below:
sql
SELECT
toStartOfInterval(timestamp, toIntervalMillisecond(100)) AS bucket,
count() AS count,
sum(count) OVER (ORDER BY bucket) AS cumulative
FROM MidJourney.images
WHERE (timestamp >= {start:String}) AND (timestamp <= {end:String})
GROUP BY ALL
ORDER BY bucket ASC
WITH FILL
FROM toDateTime64({start:String}, 3)
TO toDateTime64({end:String}, 3) + INTERVAL 1 millisecond
STEP toIntervalMillisecond(100);
response
βββββββββββββββββββbucketββ¬βcountββ¬βcumulativeββ
β 2023-03-24 00:24:03.000 β 0 β 0 β
β 2023-03-24 00:24:03.100 β 0 β 0 β
β 2023-03-24 00:24:03.200 β 0 β 0 β
β 2023-03-24 00:24:03.300 β 0 β 0 β
β 2023-03-24 00:24:03.400 β 0 β 0 β
β 2023-03-24 00:24:03.500 β 0 β 0 β
β 2023-03-24 00:24:03.600 β 1 β 1 β
β 2023-03-24 00:24:03.700 β 0 β 0 β
β 2023-03-24 00:24:03.800 β 1 β 2 β
β 2023-03-24 00:24:03.900 β 1 β 3 β
β 2023-03-24 00:24:04.000 β 0 β 0 β
β 2023-03-24 00:24:04.100 β 0 β 0 β
β 2023-03-24 00:24:04.200 β 0 β 0 β
β 2023-03-24 00:24:04.300 β 1 β 4 β
β 2023-03-24 00:24:04.400 β 1 β 5 β
β 2023-03-24 00:24:04.500 β 0 β 0 β
β 2023-03-24 00:24:04.600 β 0 β 0 β
β 2023-03-24 00:24:04.700 β 0 β 0 β
β 2023-03-24 00:24:04.800 β 1 β 6 β
β 2023-03-24 00:24:04.900 β 0 β 0 β
β 2023-03-24 00:24:05.000 β 0 β 0 β
βββββββββββββββββββββββββββ΄ββββββββ΄βββββββββββββ
The values in the cumulative column aren't working how we'd like them to.
WITH FILL...INTERPOLATE {#with-fillinterpolate}
Any rows that have `0` in the `count` column also have `0` in the `cumulative` column, whereas we'd rather it use the previous value in the `cumulative` column.
We can do this by using the `INTERPOLATE` clause, as shown below:
sql
SELECT
toStartOfInterval(timestamp, toIntervalMillisecond(100)) AS bucket,
count() AS count,
sum(count) OVER (ORDER BY bucket) AS cumulative
FROM MidJourney.images
WHERE (timestamp >= {start:String}) AND (timestamp <= {end:String})
GROUP BY ALL
ORDER BY bucket ASC
WITH FILL
FROM toDateTime64({start:String}, 3)
TO toDateTime64({end:String}, 3) + INTERVAL 100 millisecond
STEP toIntervalMillisecond(100)
INTERPOLATE (cumulative);
response
βββββββββββββββββββbucketββ¬βcountββ¬βcumulativeββ
β 2023-03-24 00:24:03.000 β 0 β 0 β
β 2023-03-24 00:24:03.100 β 0 β 0 β
β 2023-03-24 00:24:03.200 β 0 β 0 β
β 2023-03-24 00:24:03.300 β 0 β 0 β
β 2023-03-24 00:24:03.400 β 0 β 0 β
β 2023-03-24 00:24:03.500 β 0 β 0 β
β 2023-03-24 00:24:03.600 β 1 β 1 β
β 2023-03-24 00:24:03.700 β 0 β 1 β
β 2023-03-24 00:24:03.800 β 1 β 2 β
β 2023-03-24 00:24:03.900 β 1 β 3 β
β 2023-03-24 00:24:04.000 β 0 β 3 β
β 2023-03-24 00:24:04.100 β 0 β 3 β
β 2023-03-24 00:24:04.200 β 0 β 3 β
β 2023-03-24 00:24:04.300 β 1 β 4 β
β 2023-03-24 00:24:04.400 β 1 β 5 β
β 2023-03-24 00:24:04.500 β 0 β 5 β
β 2023-03-24 00:24:04.600 β 0 β 5 β
β 2023-03-24 00:24:04.700 β 0 β 5 β
β 2023-03-24 00:24:04.800 β 1 β 6 β
β 2023-03-24 00:24:04.900 β 0 β 6 β
β 2023-03-24 00:24:05.000 β 0 β 6 β
βββββββββββββββββββββββββββ΄ββββββββ΄βββββββββββββ
That looks much better.
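`INTERPOLATE (cumulative)` effectively carries the last computed value forward into the filled rows. The same forward-fill logic, sketched in plain Python for intuition (this is not how ClickHouse implements it):

```python
def forward_fill(rows):
    """rows: list of (bucket, count, cumulative) tuples where gap-filled
    rows have cumulative == 0; carry the previous cumulative forward."""
    filled, last = [], 0
    for bucket, count, cumulative in rows:
        if count == 0:
            cumulative = last  # reuse the previous value, like INTERPOLATE
        last = cumulative
        filled.append((bucket, count, cumulative))
    return filled

rows = [("03.600", 1, 1), ("03.700", 0, 0), ("03.800", 1, 2),
        ("03.900", 1, 3), ("04.000", 0, 0)]
print(forward_fill(rows))
# [('03.600', 1, 1), ('03.700', 0, 1), ('03.800', 1, 2), ('03.900', 1, 3), ('04.000', 0, 3)]
```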
And now to finish it off, let's add a bar chart using the `bar` function, not forgetting to add our new column to the `INTERPOLATE` clause.
sql
SELECT
toStartOfInterval(timestamp, toIntervalMillisecond(100)) AS bucket,
count() AS count,
sum(count) OVER (ORDER BY bucket) AS cumulative,
bar(cumulative, 0, 10, 10) AS barChart
FROM MidJourney.images
WHERE (timestamp >= {start:String}) AND (timestamp <= {end:String})
GROUP BY ALL
ORDER BY bucket ASC
WITH FILL
FROM toDateTime64({start:String}, 3)
TO toDateTime64({end:String}, 3) + INTERVAL 100 millisecond
STEP toIntervalMillisecond(100)
INTERPOLATE (cumulative, barChart);
response
βββββββββββββββββββbucketββ¬βcountββ¬βcumulativeββ¬βbarChartββ
β 2023-03-24 00:24:03.000 β 0 β 0 β β
β 2023-03-24 00:24:03.100 β 0 β 0 β β
β 2023-03-24 00:24:03.200 β 0 β 0 β β
β 2023-03-24 00:24:03.300 β 0 β 0 β β
β 2023-03-24 00:24:03.400 β 0 β 0 β β
β 2023-03-24 00:24:03.500 β 0 β 0 β β
β 2023-03-24 00:24:03.600 β 1 β 1 β β β
β 2023-03-24 00:24:03.700 β 0 β 1 β β β
β 2023-03-24 00:24:03.800 β 1 β 2 β ββ β
β 2023-03-24 00:24:03.900 β 1 β 3 β βββ β
β 2023-03-24 00:24:04.000 β 0 β 3 β βββ β
β 2023-03-24 00:24:04.100 β 0 β 3 β βββ β
β 2023-03-24 00:24:04.200 β 0 β 3 β βββ β
β 2023-03-24 00:24:04.300 β 1 β 4 β ββββ β
β 2023-03-24 00:24:04.400 β 1 β 5 β βββββ β
β 2023-03-24 00:24:04.500 β 0 β 5 β βββββ β
β 2023-03-24 00:24:04.600 β 0 β 5 β βββββ β
β 2023-03-24 00:24:04.700 β 0 β 5 β βββββ β
β 2023-03-24 00:24:04.800 β 1 β 6 β ββββββ β
β 2023-03-24 00:24:04.900 β 0 β 6 β ββββββ β
β 2023-03-24 00:24:05.000 β 0 β 6 β ββββββ β
βββββββββββββββββββββββββββ΄ββββββββ΄βββββββββββββ΄βββββββββββ
slug: /guides/developer/deduplicating-inserts-on-retries
title: 'Deduplicating Inserts on Retries'
description: 'Preventing duplicate data when retrying insert operations'
keywords: ['deduplication', 'deduplicate', 'insert retries', 'inserts']
doc_type: 'guide'
Insert operations can sometimes fail due to errors such as timeouts. When inserts fail, data may or may not have been successfully inserted. This guide covers how to enable deduplication on insert retries such that the same data does not get inserted more than once.
When an insert is retried, ClickHouse tries to determine whether the data has already been successfully inserted. If the inserted data is marked as a duplicate, ClickHouse does not insert it into the destination table. However, the user will still receive a successful operation status as if the data had been inserted normally.
Limitations {#limitations}
Uncertain insert status {#uncertain-insert-status}
The user must retry the insert operation until it succeeds. If all retries fail, it is impossible to determine whether the data was inserted or not. When materialized views are involved, it is also unclear in which tables the data may have appeared. The materialized views could be out of sync with the source table.
Deduplication window limit {#deduplication-window-limit}
If more than `*_deduplication_window` other insert operations occur during the retry sequence, deduplication may not work as intended. In this case, the same data can be inserted multiple times.
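This caveat is easy to picture with a toy log whose window holds only two `block_id`s. The sketch below is illustrative only, not ClickHouse's implementation:

```python
from collections import deque

# Tiny deduplication window of 2 block_ids to show the overflow caveat.
window = deque(maxlen=2)

def try_insert(block_id):
    """Insert a block unless its id is still in the window."""
    if block_id in window:
        return "deduplicated"
    window.append(block_id)  # oldest id is evicted when the window is full
    return "inserted"

print(try_insert("A"))  # inserted
print(try_insert("B"))  # inserted
print(try_insert("C"))  # inserted -- "A" is now evicted from the window
print(try_insert("A"))  # inserted again: the duplicate slipped through
```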
Enabling insert deduplication on retries {#enabling-insert-deduplication-on-retries}
Insert deduplication for tables {#insert-deduplication-for-tables}
Only `*MergeTree` engines support deduplication on insertion.
For `*ReplicatedMergeTree` engines, insert deduplication is enabled by default and is controlled by the `replicated_deduplication_window` and `replicated_deduplication_window_seconds` settings. For non-replicated `*MergeTree` engines, deduplication is controlled by the `non_replicated_deduplication_window` setting.
The settings above determine the parameters of the deduplication log for a table. The deduplication log stores a finite number of `block_id`s, which determine how deduplication works (see below).
Query-level insert deduplication {#query-level-insert-deduplication}
The setting `insert_deduplicate=1` enables deduplication at the query level. Note that if you insert data with `insert_deduplicate=0`, that data cannot be deduplicated even if you retry an insert with `insert_deduplicate=1`. This is because the `block_id`s are not written for blocks during inserts with `insert_deduplicate=0`.
How insert deduplication works {#how-insert-deduplication-works}
When data is inserted into ClickHouse, it splits data into blocks based on the number of rows and bytes. | {"source_file": "deduplicating-inserts-on-retries.md"} | [
For tables using
*MergeTree
engines, each block is assigned a unique
block_id
, which is a hash of the data in that block. This
block_id
is used as a unique key for the insert operation. If the same
block_id
is found in the deduplication log, the block is considered a duplicate and is not inserted into the table.
This approach works well for cases where inserts contain different data. However, if the same data is inserted multiple times intentionally, you need to use the
insert_deduplication_token
setting to control the deduplication process. This setting allows you to specify a unique token for each insert, which ClickHouse uses to determine whether the data is a duplicate.
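The two keying strategies (a hash of the block's data by default, the user-supplied token when present) can be sketched in Python. The hash construction below is an illustrative assumption; ClickHouse's actual hashing scheme is an internal detail.

```python
import hashlib

def block_id(rows, token=None):
    """Compute an illustrative deduplication key for a block of rows."""
    if token is not None:
        # insert_deduplication_token takes priority over the data hash.
        return "token:" + token
    payload = "\n".join(",".join(map(str, row)) for row in rows)
    return "hash:" + hashlib.sha256(payload.encode()).hexdigest()

# Identical data yields identical keys, so a retried block is skipped...
assert block_id([(0, "A")]) == block_id([(0, "A")])
# ...and a token makes even different data collide (token wins over data).
assert block_id([(0, "A")], token="t1") == block_id([(1, "B")], token="t1")
```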
For
INSERT ... VALUES
queries, splitting the inserted data into blocks is deterministic and is determined by settings. Therefore, users should retry insertions with the same settings values as the initial operation.
For
INSERT ... SELECT
queries, it is important that the
SELECT
part of the query returns the same data in the same order for each operation. Note that this is hard to achieve in practical usage. To ensure stable data order on retries, define a precise
ORDER BY
section in the
SELECT
part of the query. Keep in mind that it is possible that the selected table could be updated between retries: the result data could have changed and deduplication will not occur. Additionally, in situations where you are inserting large amounts of data, it is possible that the number of blocks after inserts can overflow the deduplication log window, and ClickHouse won't know to deduplicate the blocks.
Insert deduplication with materialized views {#insert-deduplication-with-materialized-views}
When a table has one or more materialized views, the inserted data is also inserted into the destination of those views with the defined transformations. The transformed data is also deduplicated on retries. ClickHouse performs deduplication for materialized views in the same way it deduplicates data inserted into the target table.
You can control this process using the following settings for the source table:
replicated_deduplication_window
replicated_deduplication_window_seconds
non_replicated_deduplication_window
You also have to enable the user profile setting
deduplicate_blocks_in_dependent_materialized_views
.
With the setting
insert_deduplicate=1
enabled, inserted data is deduplicated in the source table. The setting
deduplicate_blocks_in_dependent_materialized_views=1
additionally enables deduplication in dependent tables. Enable both if full deduplication is desired.
When inserting blocks into tables under materialized views, ClickHouse calculates the
block_id
by hashing a string that combines the
block_id
s from the source table and additional identifiers. This ensures accurate deduplication within materialized views, allowing data to be distinguished based on its original insertion, regardless of any transformations applied before reaching the destination table under the materialized view.
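That derivation can be sketched as follows; the exact string format and identifiers combined are internal details, so the `view_id` parameter here is a stand-in:

```python
import hashlib

def mv_block_id(source_block_ids, view_id):
    """Illustrative key for a block written through a materialized view:
    it depends on the source block IDs, not on the transformed rows."""
    combined = view_id + ":" + ",".join(source_block_ids)
    return hashlib.sha256(combined.encode()).hexdigest()

retry_1 = mv_block_id(["src-blk-1"], view_id="mv_dst")
retry_2 = mv_block_id(["src-blk-1"], view_id="mv_dst")
other = mv_block_id(["src-blk-2"], view_id="mv_dst")
assert retry_1 == retry_2  # a retried source block is deduplicated in the view
assert retry_1 != other    # different source data is kept, even if the
                           # transformed rows happen to be identical
```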
Examples {#examples}
Identical blocks after materialized view transformations {#identical-blocks-after-materialized-view-transformations}
Identical blocks, which have been generated during transformation inside a materialized view, are not deduplicated because they are based on different inserted data.
Here is an example:
```sql
CREATE TABLE dst
(
    `key` Int64,
    `value` String
)
ENGINE = MergeTree
ORDER BY tuple()
SETTINGS non_replicated_deduplication_window=1000;
CREATE MATERIALIZED VIEW mv_dst
(
    `key` Int64,
    `value` String
)
ENGINE = MergeTree
ORDER BY tuple()
SETTINGS non_replicated_deduplication_window=1000
AS SELECT
0 AS key,
value AS value
FROM dst;
```
sql
SET max_block_size=1;
SET min_insert_block_size_rows=0;
SET min_insert_block_size_bytes=0;
The settings above allow us to select from a table with a series of blocks containing only one row. These small blocks are not squashed and remain the same until they are inserted into a table.
We need to enable deduplication in the materialized view:
sql
SET deduplicate_blocks_in_dependent_materialized_views=1;
```sql
INSERT INTO dst SELECT
number + 1 AS key,
IF(key = 0, 'A', 'B') AS value
FROM numbers(2);
SELECT
*,
_part
FROM dst
ORDER BY all;
ββkeyββ¬βvalueββ¬β_partββββββ
β 1 β B β all_0_0_0 β
β 2 β B β all_1_1_0 β
βββββββ΄ββββββββ΄ββββββββββββ
```
Here we see that two parts have been inserted into the
dst
table: two blocks from the select resulted in two parts on insert. The parts contain different data.
```sql
SELECT
*,
_part
FROM mv_dst
ORDER BY all;
ββkeyββ¬βvalueββ¬β_partββββββ
β 0 β B β all_0_0_0 β
β 0 β B β all_1_1_0 β
βββββββ΄ββββββββ΄ββββββββββββ
```
Here we see that 2 parts have been inserted into the
mv_dst
table. Those parts contain the same data; however, they are not deduplicated.
```sql
INSERT INTO dst SELECT
number + 1 AS key,
IF(key = 0, 'A', 'B') AS value
FROM numbers(2);
SELECT
*,
_part
FROM dst
ORDER BY all;
ββkeyββ¬βvalueββ¬β_partββββββ
β 1 β B β all_0_0_0 β
β 2 β B β all_1_1_0 β
βββββββ΄ββββββββ΄ββββββββββββ
SELECT
*,
_part
FROM mv_dst
ORDER BY all;
ββkeyββ¬βvalueββ¬β_partββββββ
β 0 β B β all_0_0_0 β
β 0 β B β all_1_1_0 β
βββββββ΄ββββββββ΄ββββββββββββ
```
Here we see that when we retry the inserts, all data is deduplicated. Deduplication works for both the
dst
and
mv_dst
tables.
Identical blocks on insertion {#identical-blocks-on-insertion}
```sql
CREATE TABLE dst
(
    `key` Int64,
    `value` String
)
ENGINE = MergeTree
ORDER BY tuple()
SETTINGS non_replicated_deduplication_window=1000;
SET max_block_size=1;
SET min_insert_block_size_rows=0;
SET min_insert_block_size_bytes=0;
```
Insertion:
```sql
INSERT INTO dst SELECT
0 AS key,
'A' AS value
FROM numbers(2);
SELECT
'from dst',
*,
_part
FROM dst
ORDER BY all;
ββ'from dst'ββ¬βkeyββ¬βvalueββ¬β_partββββββ
β from dst β 0 β A β all_0_0_0 β
ββββββββββββββ΄ββββββ΄ββββββββ΄ββββββββββββ
```
With the settings above, two blocks result from the select, so there should be two blocks for insertion into table
dst
. However, we see that only one block has been inserted into table
dst
. This occurred because the second block has been deduplicated: it has the same data, and its deduplication key
block_id
, which is calculated as a hash of the inserted data, is therefore the same. This behaviour was not expected. Such cases are rare, but theoretically possible. To handle such cases correctly, the user has to provide an
insert_deduplication_token
. Let's fix this with the following examples:
Identical blocks in insertion with
insert_deduplication_token
{#identical-blocks-in-insertion-with-insert_deduplication_token}
```sql
CREATE TABLE dst
(
    `key` Int64,
    `value` String
)
ENGINE = MergeTree
ORDER BY tuple()
SETTINGS non_replicated_deduplication_window=1000;
SET max_block_size=1;
SET min_insert_block_size_rows=0;
SET min_insert_block_size_bytes=0;
```
Insertion:
```sql
INSERT INTO dst SELECT
0 AS key,
'A' AS value
FROM numbers(2)
SETTINGS insert_deduplication_token='some_user_token';
SELECT
'from dst',
*,
_part
FROM dst
ORDER BY all;
ββ'from dst'ββ¬βkeyββ¬βvalueββ¬β_partββββββ
β from dst β 0 β A β all_2_2_0 β
β from dst β 0 β A β all_3_3_0 β
ββββββββββββββ΄ββββββ΄ββββββββ΄ββββββββββββ
```
Two identical blocks have been inserted as expected.
```sql
SELECT 'second attempt';
INSERT INTO dst SELECT
0 AS key,
'A' AS value
FROM numbers(2)
SETTINGS insert_deduplication_token='some_user_token';
SELECT
'from dst',
*,
_part
FROM dst
ORDER BY all;
ββ'from dst'ββ¬βkeyββ¬βvalueββ¬β_partββββββ
β from dst β 0 β A β all_2_2_0 β
β from dst β 0 β A β all_3_3_0 β
ββββββββββββββ΄ββββββ΄ββββββββ΄ββββββββββββ
```
Retried insertion is deduplicated as expected.
```sql
SELECT 'third attempt';
INSERT INTO dst SELECT
1 AS key,
'b' AS value
FROM numbers(2)
SETTINGS insert_deduplication_token='some_user_token';
SELECT
'from dst',
*,
_part
FROM dst
ORDER BY all;
ββ'from dst'ββ¬βkeyββ¬βvalueββ¬β_partββββββ
β from dst β 0 β A β all_2_2_0 β
β from dst β 0 β A β all_3_3_0 β
ββββββββββββββ΄ββββββ΄ββββββββ΄ββββββββββββ
```
That insertion is also deduplicated even though it contains different inserted data. Note that
insert_deduplication_token
has higher priority: ClickHouse does not use the hash sum of data when
insert_deduplication_token
is provided.
Different insert operations generate the same data after transformation in the underlying table of the materialized view {#different-insert-operations-generate-the-same-data-after-transformation-in-the-underlying-table-of-the-materialized-view}
```sql
CREATE TABLE dst
(
    `key` Int64,
    `value` String
)
ENGINE = MergeTree
ORDER BY tuple()
SETTINGS non_replicated_deduplication_window=1000;
CREATE MATERIALIZED VIEW mv_dst
(
    `key` Int64,
    `value` String
)
ENGINE = MergeTree
ORDER BY tuple()
SETTINGS non_replicated_deduplication_window=1000
AS SELECT
0 AS key,
value AS value
FROM dst;
SET deduplicate_blocks_in_dependent_materialized_views=1;
SELECT 'first attempt';
INSERT INTO dst VALUES (1, 'A');
SELECT
'from dst',
*,
_part
FROM dst
ORDER BY all;
ββ'from dst'ββ¬βkeyββ¬βvalueββ¬β_partββββββ
β from dst β 1 β A β all_0_0_0 β
ββββββββββββββ΄ββββββ΄ββββββββ΄ββββββββββββ
SELECT
'from mv_dst',
*,
_part
FROM mv_dst
ORDER BY all;
ββ'from mv_dst'ββ¬βkeyββ¬βvalueββ¬β_partββββββ
β from mv_dst β 0 β A β all_0_0_0 β
βββββββββββββββββ΄ββββββ΄ββββββββ΄ββββββββββββ
SELECT 'second attempt';
INSERT INTO dst VALUES (2, 'A');
SELECT
'from dst',
*,
_part
FROM dst
ORDER BY all;
ββ'from dst'ββ¬βkeyββ¬βvalueββ¬β_partββββββ
β from dst β 1 β A β all_0_0_0 β
β from dst β 2 β A β all_1_1_0 β
ββββββββββββββ΄ββββββ΄ββββββββ΄ββββββββββββ
SELECT
'from mv_dst',
*,
_part
FROM mv_dst
ORDER BY all;
ββ'from mv_dst'ββ¬βkeyββ¬βvalueββ¬β_partββββββ
β from mv_dst β 0 β A β all_0_0_0 β
β from mv_dst β 0 β A β all_1_1_0 β
βββββββββββββββββ΄ββββββ΄ββββββββ΄ββββββββββββ
```
We insert different data each time. However, the same data is inserted into the
mv_dst
table. Data is not deduplicated because the source data was different.
Different materialized view inserts into one underlying table with equivalent data {#different-materialized-view-inserts-into-one-underlying-table-with-equivalent-data}
```sql
CREATE TABLE dst
(
    `key` Int64,
    `value` String
)
ENGINE = MergeTree
ORDER BY tuple()
SETTINGS non_replicated_deduplication_window=1000;
CREATE TABLE mv_dst
(
    `key` Int64,
    `value` String
)
ENGINE = MergeTree
ORDER BY tuple()
SETTINGS non_replicated_deduplication_window=1000;
CREATE MATERIALIZED VIEW mv_first
TO mv_dst
AS SELECT
0 AS key,
value AS value
FROM dst;
CREATE MATERIALIZED VIEW mv_second
TO mv_dst
AS SELECT
0 AS key,
value AS value
FROM dst;
SET deduplicate_blocks_in_dependent_materialized_views=1;
SELECT 'first attempt';
INSERT INTO dst VALUES (1, 'A');
SELECT
'from dst',
*,
_part
FROM dst
ORDER BY all;
ββ'from dst'ββ¬βkeyββ¬βvalueββ¬β_partββββββ
β from dst β 1 β A β all_0_0_0 β
ββββββββββββββ΄ββββββ΄ββββββββ΄ββββββββββββ
SELECT
'from mv_dst',
*,
_part
FROM mv_dst
ORDER BY all;
ββ'from mv_dst'ββ¬βkeyββ¬βvalueββ¬β_partββββββ
β from mv_dst β 0 β A β all_0_0_0 β
β from mv_dst β 0 β A β all_1_1_0 β
βββββββββββββββββ΄ββββββ΄ββββββββ΄ββββββββββββ
```
Two equal blocks were inserted into the table
mv_dst
(as expected).
```sql
SELECT 'second attempt';
INSERT INTO dst VALUES (1, 'A');
SELECT
'from dst',
*,
_part
FROM dst
ORDER BY all;
ββ'from dst'ββ¬βkeyββ¬βvalueββ¬β_partββββββ
β from dst β 1 β A β all_0_0_0 β
ββββββββββββββ΄ββββββ΄ββββββββ΄ββββββββββββ
SELECT
'from mv_dst',
*,
_part
FROM mv_dst
ORDER BY all;
ββ'from mv_dst'ββ¬βkeyββ¬βvalueββ¬β_partββββββ
β from mv_dst β 0 β A β all_0_0_0 β
β from mv_dst β 0 β A β all_1_1_0 β
βββββββββββββββββ΄ββββββ΄ββββββββ΄ββββββββββββ
```
The retried operation is deduplicated on both tables
dst
and
mv_dst
.
slug: /guides/developer/dynamic-column-selection
sidebar_label: 'Dynamic column selection'
title: 'Dynamic column selection'
description: 'Select columns using regular expressions and the APPLY, EXCEPT, and REPLACE modifiers'
doc_type: 'guide'
keywords: ['dynamic column selection', 'regular expressions', 'APPLY modifier', 'advanced queries', 'developer guide']
Dynamic column selection
is a powerful but underutilized ClickHouse feature that allows you to select columns using regular expressions instead of naming each column individually. You can also apply functions to matching columns using the
APPLY
modifier, making it incredibly useful for data analysis and transformation tasks.
We're going to learn how to use this feature with help from the
New York taxis dataset
, which you can also find in the
ClickHouse SQL playground
.
Selecting columns that match a pattern {#selecting-columns}
Let's start with a common scenario: selecting only the columns that contain
_amount
from the NYC taxi dataset. Instead of manually typing each column name, we can use the
COLUMNS
expression with a regular expression:
sql
FROM nyc_taxi.trips
SELECT COLUMNS('.*_amount')
LIMIT 10;
Try this query in the SQL playground
This query returns the first 10 rows, but only for columns whose names match the pattern
.*_amount
(any characters followed by "_amount").
text
ββfare_amountββ¬βtip_amountββ¬βtolls_amountββ¬βtotal_amountββ
1. β 9 β 0 β 0 β 9.8 β
2. β 9 β 0 β 0 β 9.8 β
3. β 3.5 β 0 β 0 β 4.8 β
4. β 3.5 β 0 β 0 β 4.8 β
5. β 3.5 β 0 β 0 β 4.3 β
6. β 3.5 β 0 β 0 β 4.3 β
7. β 2.5 β 0 β 0 β 3.8 β
8. β 2.5 β 0 β 0 β 3.8 β
9. β 5 β 0 β 0 β 5.8 β
10. β 5 β 0 β 0 β 5.8 β
βββββββββββββββ΄βββββββββββββ΄βββββββββββββββ΄βββββββββββββββ
Let's say we also want to return columns that contain the terms
fee
or
tax
.
We can update the regular expression to include those:
sql
SELECT COLUMNS('.*_amount|fee|tax')
FROM nyc_taxi.trips
ORDER BY rand()
LIMIT 3;
Try this query in the SQL playground
text
ββfare_amountββ¬βmta_taxββ¬βtip_amountββ¬βtolls_amountββ¬βehail_feeββ¬βtotal_amountββ
1. β 5 β 0.5 β 1 β 0 β 0 β 7.8 β
2. β 12.5 β 0.5 β 0 β 0 β 0 β 13.8 β
3. β 4.5 β 0.5 β 1.66 β 0 β 0 β 9.96 β
βββββββββββββββ΄ββββββββββ΄βββββββββββββ΄βββββββββββββββ΄ββββββββββββ΄βββββββββββββββ
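The matching behaviour can be reproduced outside of SQL with Python's `re` module, which is handy for checking what a pattern will select before running a query. The column list below is a subset of the dataset, and the unanchored-search semantics are inferred from the results shown in this guide:

```python
import re

columns = ["fare_amount", "mta_tax", "tip_amount", "tolls_amount",
           "ehail_fee", "total_amount", "pickup_datetime"]

# COLUMNS('<regex>') keeps every column whose name matches the pattern.
# An unanchored search mirrors the results above: 'mta_tax' is picked up
# by the 'tax' alternative even though it doesn't start with it.
selected = [c for c in columns if re.search(r".*_amount", c)]
wider = [c for c in columns if re.search(r".*_amount|fee|tax", c)]

print(selected)  # the four *_amount columns
print(wider)     # additionally mta_tax and ehail_fee
```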
Selecting multiple patterns {#selecting-multiple-patterns}
We can combine multiple column patterns in a single query:
sql
SELECT
COLUMNS('.*_amount'),
COLUMNS('.*_date.*')
FROM nyc_taxi.trips
LIMIT 5;
Try this query in the SQL playground
text
ββfare_amountββ¬βtip_amountββ¬βtolls_amountββ¬βtotal_amountββ¬βpickup_dateββ¬βββββpickup_datetimeββ¬βdropoff_dateββ¬ββββdropoff_datetimeββ
1. β 9 β 0 β 0 β 9.8 β 2001-01-01 β 2001-01-01 00:01:48 β 2001-01-01 β 2001-01-01 00:15:47 β
2. β 9 β 0 β 0 β 9.8 β 2001-01-01 β 2001-01-01 00:01:48 β 2001-01-01 β 2001-01-01 00:15:47 β
3. β 3.5 β 0 β 0 β 4.8 β 2001-01-01 β 2001-01-01 00:02:08 β 2001-01-01 β 2001-01-01 01:00:02 β
4. β 3.5 β 0 β 0 β 4.8 β 2001-01-01 β 2001-01-01 00:02:08 β 2001-01-01 β 2001-01-01 01:00:02 β
5. β 3.5 β 0 β 0 β 4.3 β 2001-01-01 β 2001-01-01 00:02:26 β 2001-01-01 β 2001-01-01 00:04:49 β
βββββββββββββββ΄βββββββββββββ΄βββββββββββββββ΄βββββββββββββββ΄ββββββββββββββ΄ββββββββββββββββββββββ΄βββββββββββββββ΄ββββββββββββββββββββββ
Apply functions to all columns {#applying-functions}
We can also use the
APPLY
modifier to apply functions across every column.
For example, if we wanted to find the maximum value of each of those columns, we could run the following query:
sql
SELECT COLUMNS('.*_amount|fee|tax') APPLY(max)
FROM nyc_taxi.trips;
Try this query in the SQL playground
text
ββmax(fare_amount)ββ¬βmax(mta_tax)ββ¬βmax(tip_amount)ββ¬βmax(tolls_amount)ββ¬βmax(ehail_fee)ββ¬βmax(total_amount)ββ
1. β 998310 β 500000.5 β 3950588.8 β 7999.92 β 1.95 β 3950611.5 β
ββββββββββββββββββββ΄βββββββββββββββ΄ββββββββββββββββββ΄ββββββββββββββββββββ΄βββββββββββββββββ΄ββββββββββββββββββββ
Or maybe we'd like to see the average instead:
sql
SELECT COLUMNS('.*_amount|fee|tax') APPLY(avg)
FROM nyc_taxi.trips
Try this query in the SQL playground
text
ββavg(fare_amount)ββ¬βββββββavg(mta_tax)ββ¬ββββavg(tip_amount)ββ¬ββavg(tolls_amount)ββ¬ββββββavg(ehail_fee)ββ¬ββavg(total_amount)ββ
1. β 11.8044154834777 β 0.4555942672733423 β 1.3469850969211845 β 0.2256511991414463 β 3.37600560437412e-9 β 14.423323722271563 β
ββββββββββββββββββββ΄βββββββββββββββββββββ΄βββββββββββββββββββββ΄βββββββββββββββββββββ΄ββββββββββββββββββββββ΄βββββββββββββββββββββ
Those values contain a lot of decimal places, but luckily we can fix that by chaining functions. In this case, we'll apply the avg function, followed by the round function:
sql
SELECT COLUMNS('.*_amount|fee|tax') APPLY(avg) APPLY(round)
FROM nyc_taxi.trips;
Try this query in the SQL playground
text
ββround(avg(fare_amount))ββ¬βround(avg(mta_tax))ββ¬βround(avg(tip_amount))ββ¬βround(avg(tolls_amount))ββ¬βround(avg(ehail_fee))ββ¬βround(avg(total_amount))ββ
1. β 12 β 0 β 1 β 0 β 0 β 14 β
βββββββββββββββββββββββββββ΄ββββββββββββββββββββββ΄βββββββββββββββββββββββββ΄βββββββββββββββββββββββββββ΄ββββββββββββββββββββββββ΄βββββββββββββββββββββββββββ
But that rounds the averages to whole numbers. If we want to round to, say, 2 decimal places, we can do that as well. In addition to functions, the
APPLY
modifier accepts a lambda, which lets us round our average values to 2 decimal places:
sql
SELECT COLUMNS('.*_amount|fee|tax') APPLY(avg) APPLY(x -> round(x, 2))
FROM nyc_taxi.trips;
Try this query in the SQL playground
text
ββround(avg(fare_amount), 2)ββ¬βround(avg(mta_tax), 2)ββ¬βround(avg(tip_amount), 2)ββ¬βround(avg(tolls_amount), 2)ββ¬βround(avg(ehail_fee), 2)ββ¬βround(avg(total_amount), 2)ββ
1. β 11.8 β 0.46 β 1.35 β 0.23 β 0 β 14.42 β
ββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββ
Replacing columns {#replacing-columns}
So far so good. But let's say we want to adjust one of the values while leaving the others as they are. For example, maybe we want to double the total amount and divide the MTA tax by 1.1. We can do that by using the
REPLACE
modifier, which replaces a column while leaving the other columns untouched.
sql
FROM nyc_taxi.trips
SELECT
COLUMNS('.*_amount|fee|tax')
REPLACE(
total_amount*2 AS total_amount,
mta_tax/1.1 AS mta_tax
)
APPLY(avg)
APPLY(col -> round(col, 2));
Try this query in the SQL playground
text
ββround(avg(fare_amount), 2)ββ¬βround(avg(diβ―, 1.1)), 2)ββ¬βround(avg(tip_amount), 2)ββ¬βround(avg(tolls_amount), 2)ββ¬βround(avg(ehail_fee), 2)ββ¬βround(avg(muβ―nt, 2)), 2)ββ
1. β 11.8 β 0.41 β 1.35 β 0.23 β 0 β 28.85 β
ββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββ
Excluding columns {#excluding-columns}
We can also choose to exclude a field by using the
EXCEPT
modifier. For example, to remove the
tolls_amount
column, we would write the following query:
sql
FROM nyc_taxi.trips
SELECT
COLUMNS('.*_amount|fee|tax') EXCEPT(tolls_amount)
REPLACE(
total_amount*2 AS total_amount,
mta_tax/1.1 AS mta_tax
)
APPLY(avg)
APPLY(col -> round(col, 2));
Try this query in the SQL playground
text
ββround(avg(fare_amount), 2)ββ¬βround(avg(diβ―, 1.1)), 2)ββ¬βround(avg(tip_amount), 2)ββ¬βround(avg(ehail_fee), 2)ββ¬βround(avg(muβ―nt, 2)), 2)ββ
1. β 11.8 β 0.41 β 1.35 β 0 β 28.85 β
ββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββ
slug: /guides/developer/cascading-materialized-views
title: 'Cascading Materialized Views'
description: 'How to use multiple materialized views from a source table.'
keywords: ['materialized view', 'aggregation']
doc_type: 'guide'
Cascading materialized views
This example demonstrates how to create a materialized view, and then how to cascade a second materialized view onto the first. This page shows how to do it, many of the possibilities, and the limitations. Many use cases can be addressed by creating a materialized view that uses a second materialized view as its source.
Example:
We will use a fake dataset with the number of views per hour for a group of domain names.
Our goals:
We need the data aggregated by month for each domain name.
We also need the data aggregated by year for each domain name.
You could choose one of these options:
Write queries that read and aggregate the data at SELECT time
Prepare the data at ingest time in a new format
Prepare the data at ingest time with a specific aggregation
Preparing the data using Materialized views will allow you to limit the amount of data and calculation ClickHouse needs to do, making your SELECT requests faster.
Source table for the materialized views {#source-table-for-the-materialized-views}
Create the source table. Because our goals involve reporting on the aggregated data and not the individual rows, we can parse it, pass the information on to the materialized views, and discard the actual incoming data. This meets our goals and saves on storage, so we will use the
Null
table engine.
sql
CREATE DATABASE IF NOT EXISTS analytics;
sql
CREATE TABLE analytics.hourly_data
(
`domain_name` String,
`event_time` DateTime,
`count_views` UInt64
)
ENGINE = Null
:::note
You can create a materialized view on a Null table. So the data written to the table will end up affecting the view, but the original raw data will still be discarded.
:::
Monthly aggregated table and materialized view {#monthly-aggregated-table-and-materialized-view}
For the first materialized view, we need to create the
Target
table, for this example, it will be
analytics.monthly_aggregated_data
and we will store the sum of the views by month and domain name.
sql
CREATE TABLE analytics.monthly_aggregated_data
(
`domain_name` String,
`month` Date,
`sumCountViews` AggregateFunction(sum, UInt64)
)
ENGINE = AggregatingMergeTree
ORDER BY (domain_name, month)
The materialized view that will forward the data on the target table will look like this:
sql
CREATE MATERIALIZED VIEW analytics.monthly_aggregated_data_mv
TO analytics.monthly_aggregated_data
AS
SELECT
toDate(toStartOfMonth(event_time)) AS month,
domain_name,
sumState(count_views) AS sumCountViews
FROM analytics.hourly_data
GROUP BY
domain_name,
month
Yearly aggregated table and materialized view {#yearly-aggregated-table-and-materialized-view}
Now we will create the second Materialized view that will be linked to our previous target table
monthly_aggregated_data
.
First, we will create a new target table that will store the sum of views aggregated by year for each domain name.
sql
CREATE TABLE analytics.year_aggregated_data
(
`domain_name` String,
`year` UInt16,
`sumCountViews` UInt64
)
ENGINE = SummingMergeTree()
ORDER BY (domain_name, year)
This step defines the cascade. The
FROM
statement will use the
monthly_aggregated_data
table, this means the data flow will be:
The data comes to the
hourly_data
table.
ClickHouse forwards the data it receives to the first materialized view's target table,
monthly_aggregated_data
,
Finally, the data received in step 2 will be forwarded to the
year_aggregated_data
.
sql
CREATE MATERIALIZED VIEW analytics.year_aggregated_data_mv
TO analytics.year_aggregated_data
AS
SELECT
toYear(toStartOfYear(month)) AS year,
domain_name,
sumMerge(sumCountViews) AS sumCountViews
FROM analytics.monthly_aggregated_data
GROUP BY
domain_name,
year
:::note
A common misinterpretation when working with materialized views is that data is read from the table. This is not how
Materialized views
work; the data forwarded is the inserted block, not the final result in your table.
Let's imagine in this example that the engine used in
monthly_aggregated_data
is a CollapsingMergeTree. The data forwarded to our second materialized view
year_aggregated_data_mv
will not be the final result of the collapsed table; it will be the block of data with the fields defined in the
SELECT ... GROUP BY
.
If you are using CollapsingMergeTree, ReplacingMergeTree, or even SummingMergeTree and you plan to create a cascaded materialized view, you need to understand the limitations described here.
:::
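The note above can be illustrated with a toy Python model of the cascade: each stage consumes only the freshly inserted (and transformed) block, never the accumulated contents of its source table. The helper below is a sketch, not ClickHouse's execution model, and it reuses this guide's sample numbers.

```python
from collections import defaultdict

monthly = defaultdict(int)  # (domain, year, month) -> summed views
yearly = defaultdict(int)   # (domain, year)        -> summed views

def insert_hourly(rows):
    """Simulate one INSERT into hourly_data flowing through the cascade."""
    # First materialized view: aggregate the *inserted block* by month.
    block = defaultdict(int)
    for domain, (year, month), views in rows:
        block[(domain, year, month)] += views
    for key, views in block.items():
        monthly[key] += views
    # Second materialized view receives the first view's output block;
    # it never re-reads the whole monthly table.
    for (domain, year, month), views in block.items():
        yearly[(domain, year)] += views

insert_hourly([("clickhouse.com", (2019, 1), 1),
               ("clickhouse.com", (2019, 2), 2),
               ("clickhouse.com", (2019, 2), 3),
               ("clickhouse.com", (2020, 1), 6)])
assert yearly[("clickhouse.com", 2019)] == 6
assert yearly[("clickhouse.com", 2020)] == 6
```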
Sample data {#sample-data}
Now is the time to test our cascade materialized view by inserting some data:
sql
INSERT INTO analytics.hourly_data (domain_name, event_time, count_views)
VALUES ('clickhouse.com', '2019-01-01 10:00:00', 1),
('clickhouse.com', '2019-02-02 00:00:00', 2),
('clickhouse.com', '2019-02-01 00:00:00', 3),
('clickhouse.com', '2020-01-01 00:00:00', 6);
If you SELECT the contents of
analytics.hourly_data
you will see the following because the table engine is
Null
, but the data was processed.
sql
SELECT * FROM analytics.hourly_data
```response
Ok.
0 rows in set. Elapsed: 0.002 sec.
```
We have used a small dataset so we can follow along and compare the results with what we expect. Once your flow is correct with a small dataset, you can move on to a large amount of data.
Results {#results}
If you try to query the target table by selecting the
sumCountViews
field, you will see the binary representation (in some terminals), as the value is not stored as a number but as an AggregateFunction type.
To get the final result of the aggregation you should use the
-Merge
suffix.
You can see the special characters stored in AggregateFunction with this query:
sql
SELECT sumCountViews FROM analytics.monthly_aggregated_data
```response
ββsumCountViewsββ
β β
β β
β β
βββββββββββββββββ
3 rows in set. Elapsed: 0.003 sec.
```
Instead, let's try using the
Merge
suffix to get the
sumCountViews
value:
sql
SELECT
sumMerge(sumCountViews) AS sumCountViews
FROM analytics.monthly_aggregated_data;
```response
ββsumCountViewsββ
β 12 β
βββββββββββββββββ
1 row in set. Elapsed: 0.003 sec.
```
In the
AggregatingMergeTree
we have defined the
AggregateFunction
as
sum
, so we can use the
sumMerge
. When we use the function
avg
on the
AggregateFunction
, we will use
avgMerge
, and so forth.
sql
SELECT
month,
domain_name,
sumMerge(sumCountViews) AS sumCountViews
FROM analytics.monthly_aggregated_data
GROUP BY
domain_name,
month
Now we can verify that the materialized views meet the goals we defined.
Now that we have the data stored in the target table
monthly_aggregated_data
we can get the data aggregated by month for each domain name:
sql
SELECT
month,
domain_name,
sumMerge(sumCountViews) AS sumCountViews
FROM analytics.monthly_aggregated_data
GROUP BY
domain_name,
month
```response
βββββββmonthββ¬βdomain_nameβββββ¬βsumCountViewsββ
β 2020-01-01 β clickhouse.com β 6 β
β 2019-01-01 β clickhouse.com β 1 β
β 2019-02-01 β clickhouse.com β 5 β
ββββββββββββββ΄βββββββββββββββββ΄ββββββββββββββββ
3 rows in set. Elapsed: 0.004 sec.
```
The data aggregated by year for each domain name:
```sql
SELECT
    year,
    domain_name,
    sum(sumCountViews)
FROM analytics.year_aggregated_data
GROUP BY
    domain_name,
    year
```
```response
┌─year─┬─domain_name────┬─sum(sumCountViews)─┐
│ 2019 │ clickhouse.com │                  6 │
│ 2020 │ clickhouse.com │                  6 │
└──────┴────────────────┴────────────────────┘
2 rows in set. Elapsed: 0.004 sec.
```
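The cascade can be mimicked in a few lines of Python. This is a sketch with assumed raw events chosen to reproduce the totals above: raw events roll up into monthly sums, and the monthly table rolls up again into yearly sums.

```python
# Sketch of the cascade with assumed raw events (chosen to reproduce the
# totals above); not ClickHouse code.
from collections import defaultdict

events = [  # (date, domain_name, views)
    ("2019-01-01", "clickhouse.com", 1),
    ("2019-02-01", "clickhouse.com", 5),
    ("2020-01-01", "clickhouse.com", 6),
]

# First materialized view: aggregate raw events by month.
monthly = defaultdict(int)
for date, domain, views in events:
    monthly[(date[:7], domain)] += views

# Second (cascaded) view: aggregate the monthly table by year.
yearly = defaultdict(int)
for (month, domain), views in monthly.items():
    yearly[(month[:4], domain)] += views

print(dict(yearly))  # {('2019', 'clickhouse.com'): 6, ('2020', 'clickhouse.com'): 6}
```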
Combining multiple source tables to single target table {#combining-multiple-source-tables-to-single-target-table}
Materialized views can also be used to combine multiple source tables into the same destination table. This is useful for creating a materialized view with `UNION ALL`-like logic.
First, create two source tables representing different sets of metrics:
```sql
CREATE TABLE analytics.impressions
(
    `event_time` DateTime,
    `domain_name` String
) ENGINE = MergeTree ORDER BY (domain_name, event_time)
;
```
```sql
CREATE TABLE analytics.clicks
(
    `event_time` DateTime,
    `domain_name` String
) ENGINE = MergeTree ORDER BY (domain_name, event_time)
;
```
Then create the target table with the combined set of metrics:
```sql
CREATE TABLE analytics.daily_overview
(
    `on_date` Date,
    `domain_name` String,
    `impressions` SimpleAggregateFunction(sum, UInt64),
    `clicks` SimpleAggregateFunction(sum, UInt64)
) ENGINE = AggregatingMergeTree ORDER BY (on_date, domain_name)
```
Create two materialized views pointing to the same target table. You don't need to explicitly include the missing columns:
```sql
CREATE MATERIALIZED VIEW analytics.daily_impressions_mv
TO analytics.daily_overview
AS
SELECT
toDate(event_time) AS on_date,
domain_name,
count() AS impressions,
    0 AS clicks -- if you omit this, it will still default to 0
FROM
analytics.impressions
GROUP BY
toDate(event_time) AS on_date,
domain_name
;
CREATE MATERIALIZED VIEW analytics.daily_clicks_mv
TO analytics.daily_overview
AS
SELECT
toDate(event_time) AS on_date,
domain_name,
count() AS clicks,
    0 AS impressions -- if you omit this, it will still default to 0
FROM
analytics.clicks
GROUP BY
toDate(event_time) AS on_date,
domain_name
;
```
Now when you insert values, they will be aggregated into their respective columns in the target table:
```sql
INSERT INTO analytics.impressions (domain_name, event_time)
VALUES ('clickhouse.com', '2019-01-01 00:00:00'),
('clickhouse.com', '2019-01-01 12:00:00'),
('clickhouse.com', '2019-02-01 00:00:00'),
('clickhouse.com', '2019-03-01 00:00:00')
;
INSERT INTO analytics.clicks (domain_name, event_time)
VALUES ('clickhouse.com', '2019-01-01 00:00:00'),
('clickhouse.com', '2019-01-01 12:00:00'),
('clickhouse.com', '2019-03-01 00:00:00')
;
```
Query the combined impressions and clicks in the target table:
```sql
SELECT
    on_date,
    domain_name,
    sum(impressions) AS impressions,
    sum(clicks) AS clicks
FROM
    analytics.daily_overview
GROUP BY
    on_date,
    domain_name
;
```
This query should output something like:
```response
┌────on_date─┬─domain_name────┬─impressions─┬─clicks─┐
│ 2019-01-01 │ clickhouse.com │           2 │      2 │
│ 2019-03-01 │ clickhouse.com │           1 │      1 │
│ 2019-02-01 │ clickhouse.com │           1 │      0 │
└────────────┴────────────────┴─────────────┴────────┘
3 rows in set. Elapsed: 0.018 sec.
```
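The mechanics can be sketched in Python under an assumed simplified model: each view writes into the shared target, filling the column it does not produce with 0, and the final `GROUP BY` sums the pieces back together.

```python
# Simplified model of the two views writing into one target table;
# the dates mirror the inserts above.
from collections import defaultdict

impressions = ["2019-01-01", "2019-01-01", "2019-02-01", "2019-03-01"]
clicks = ["2019-01-01", "2019-01-01", "2019-03-01"]

target = defaultdict(lambda: [0, 0])  # on_date -> [impressions, clicks]
for d in impressions:
    target[d][0] += 1  # daily_impressions_mv: the clicks column stays 0
for d in clicks:
    target[d][1] += 1  # daily_clicks_mv: the impressions column stays 0

print(sorted(target.items()))
# [('2019-01-01', [2, 2]), ('2019-02-01', [1, 0]), ('2019-03-01', [1, 1])]
```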
slug: /guides/developer/on-the-fly-mutations
sidebar_label: 'On-the-fly mutation'
title: 'On-the-fly Mutations'
keywords: ['On-the-fly mutation']
description: 'Provides a description of on-the-fly mutations'
doc_type: 'guide'
On-the-fly mutations {#on-the-fly-mutations}
When on-the-fly mutations are enabled, updated rows are marked as updated immediately and subsequent `SELECT` queries will automatically return the changed values. When on-the-fly mutations are not enabled, you may have to wait for your mutations to be applied via a background process to see the changed values.
On-the-fly mutations can be enabled for `MergeTree`-family tables by enabling the query-level setting `apply_mutations_on_fly`:
```sql
SET apply_mutations_on_fly = 1;
```
Example {#example}
Let's create a table and run some mutations:
```sql
CREATE TABLE test_on_fly_mutations (id UInt64, v String)
ENGINE = MergeTree ORDER BY id;
-- Disable background materialization of mutations to showcase
-- default behavior when on-the-fly mutations are not enabled
SYSTEM STOP MERGES test_on_fly_mutations;
SET mutations_sync = 0;
-- Insert some rows in our new table
INSERT INTO test_on_fly_mutations VALUES (1, 'a'), (2, 'b'), (3, 'c');
-- Update the values of the rows
ALTER TABLE test_on_fly_mutations UPDATE v = 'd' WHERE id = 1;
ALTER TABLE test_on_fly_mutations DELETE WHERE v = 'd';
ALTER TABLE test_on_fly_mutations UPDATE v = 'e' WHERE id = 2;
ALTER TABLE test_on_fly_mutations DELETE WHERE v = 'e';
```
Let's check the result of the updates via a `SELECT` query:
```sql
-- Explicitly disable on-the-fly mutations
SET apply_mutations_on_fly = 0;
SELECT id, v FROM test_on_fly_mutations ORDER BY id;
```
Note that the values of the rows have not yet been updated when we query the new table:
```response
┌─id─┬─v─┐
│  1 │ a │
│  2 │ b │
│  3 │ c │
└────┴───┘
```
Let's now see what happens when we enable on-the-fly mutations:
```sql
-- Enable on-the-fly mutations
SET apply_mutations_on_fly = 1;
SELECT id, v FROM test_on_fly_mutations ORDER BY id;
```
The `SELECT` query now returns the correct result immediately, without having to wait for the mutations to be applied:
```response
┌─id─┬─v─┐
│  3 │ c │
└────┴───┘
```
Performance impact {#performance-impact}
When on-the-fly mutations are enabled, mutations are not materialized immediately but are only applied during `SELECT` queries. Note, however, that mutations are still materialized asynchronously in the background, which is a heavy process.
If the number of submitted mutations constantly exceeds the number of mutations that are processed in the background over some time interval, the queue of unmaterialized mutations that have to be applied will continue to grow. This will result in the eventual degradation of `SELECT` query performance.
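Conceptually, the behavior can be replayed in a Python sketch of an assumed toy data model (this is not ClickHouse's implementation): mutations are queued instead of rewriting parts, and with on-the-fly mutations enabled, a `SELECT` replays the queue over the rows it reads.

```python
# Assumed toy model: mutations are queued instead of rewriting parts; with
# on-the-fly mutations enabled, SELECT replays the queue over the rows it reads.
rows = [(1, "a"), (2, "b"), (3, "c")]
pending = [  # the four ALTERs from the example above, in submission order
    ("update", lambda r: r[0] == 1, lambda r: (r[0], "d")),
    ("delete", lambda r: r[1] == "d", None),
    ("update", lambda r: r[0] == 2, lambda r: (r[0], "e")),
    ("delete", lambda r: r[1] == "e", None),
]

def select(table, apply_on_fly):
    if not apply_on_fly:
        return list(table)  # unmaterialized mutations stay invisible
    out = list(table)
    for kind, pred, fn in pending:
        if kind == "update":
            out = [fn(r) if pred(r) else r for r in out]
        else:  # delete
            out = [r for r in out if not pred(r)]
    return out

print(select(rows, apply_on_fly=False))  # [(1, 'a'), (2, 'b'), (3, 'c')]
print(select(rows, apply_on_fly=True))   # [(3, 'c')]
```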
We suggest enabling the setting `apply_mutations_on_fly` together with other `MergeTree`-level settings such as `number_of_mutations_to_throw` and `number_of_mutations_to_delay` to restrict the infinite growth of unmaterialized mutations.
Support for subqueries and non-deterministic functions {#support-for-subqueries-and-non-deterministic-functions}
On-the-fly mutations have limited support for subqueries and non-deterministic functions. Only scalar subqueries with a reasonably sized result (controlled by the setting `mutations_max_literal_size_to_replace`) are supported, and only constant non-deterministic functions (e.g. the function `now()`).
These behaviours are controlled by the following settings:
- `mutations_execute_nondeterministic_on_initiator` - if true, non-deterministic functions are executed on the initiator replica and are replaced as literals in `UPDATE` and `DELETE` queries. Default value: `false`.
- `mutations_execute_subqueries_on_initiator` - if true, scalar subqueries are executed on the initiator replica and are replaced as literals in `UPDATE` and `DELETE` queries. Default value: `false`.
- `mutations_max_literal_size_to_replace` - the maximum size of serialized literals in bytes to replace in `UPDATE` and `DELETE` queries. Default value: `16384` (16 KiB).
slug: /guides/developer/deduplication
sidebar_label: 'Deduplication strategies'
sidebar_position: 3
description: 'Use deduplication when you need to perform frequent upserts, updates and deletes.'
title: 'Deduplication Strategies'
keywords: ['deduplication strategies', 'data deduplication', 'upserts', 'updates and deletes', 'developer guide']
doc_type: 'guide'
import deduplication from '@site/static/images/guides/developer/de_duplication.png';
import Image from '@theme/IdealImage';
Deduplication strategies
**Deduplication** refers to the process of removing duplicate rows of a dataset. In an OLTP database, this is done easily because each row has a unique primary key, but at the cost of slower inserts: every inserted row needs to first be searched for and, if found, replaced.
ClickHouse is built for speed when it comes to data insertion. The storage files are immutable and ClickHouse does not check for an existing primary key before inserting a row, so deduplication involves a bit more effort. This also means that deduplication is not immediate; it is **eventual**, which has a few side effects:
- At any moment in time your table can still have duplicates (rows with the same sorting key)
- The actual removal of duplicate rows occurs during the merging of parts
- Your queries need to allow for the possibility of duplicates
:::note
ClickHouse provides free training on deduplication and many other topics. The [Deleting and Updating Data training module](https://learn.clickhouse.com/visitor_catalog_class/show/1328954/?utm_source=clickhouse&utm_medium=docs) is a good place to start.
:::
Options for deduplication {#options-for-deduplication}
Deduplication is implemented in ClickHouse using the following table engines:
- `ReplacingMergeTree` table engine: with this table engine, duplicate rows with the same sorting key are removed during merges. `ReplacingMergeTree` is a good option for emulating upsert behavior (where you want queries to return the last row inserted).
- Collapsing rows: the `CollapsingMergeTree` and `VersionedCollapsingMergeTree` table engines use a logic where an existing row is "canceled" and a new row is inserted. They are more complex to implement than `ReplacingMergeTree`, but your queries and aggregations can be simpler to write without worrying about whether or not data has been merged yet. These two table engines are useful when you need to update data frequently.
We walk through both of these techniques below. For more details, check out our free on-demand Deleting and Updating Data training module.
Using ReplacingMergeTree for Upserts {#using-replacingmergetree-for-upserts}
Let's look at a simple example where a table contains Hacker News comments with a views column representing the number of times a comment was viewed. Suppose we insert a new row when an article is published and upsert a new row once a day with the total number of views if the value increases: | {"source_file": "deduplication.md"} | [
```sql
CREATE TABLE hackernews_rmt (
    id UInt32,
    author String,
    comment String,
    views UInt64
)
ENGINE = ReplacingMergeTree
PRIMARY KEY (author, id)
```
Let's insert two rows:
```sql
INSERT INTO hackernews_rmt VALUES
    (1, 'ricardo', 'This is post #1', 0),
    (2, 'ch_fan', 'This is post #2', 0)
```
To update the `views` column, insert a new row with the same primary key (notice the new values of the `views` column):
```sql
INSERT INTO hackernews_rmt VALUES
    (1, 'ricardo', 'This is post #1', 100),
    (2, 'ch_fan', 'This is post #2', 200)
```
The table now has 4 rows:
```sql
SELECT *
FROM hackernews_rmt
```
```response
┌─id─┬─author──┬─comment─────────┬─views─┐
│  2 │ ch_fan  │ This is post #2 │     0 │
│  1 │ ricardo │ This is post #1 │     0 │
└────┴─────────┴─────────────────┴───────┘
┌─id─┬─author──┬─comment─────────┬─views─┐
│  2 │ ch_fan  │ This is post #2 │   200 │
│  1 │ ricardo │ This is post #1 │   100 │
└────┴─────────┴─────────────────┴───────┘
```
The separate boxes above in the output demonstrate the two parts behind the scenes - this data has not been merged yet, so the duplicate rows have not been removed. Let's use the `FINAL` keyword in the `SELECT` query, which results in a logical merging of the query result:
```sql
SELECT *
FROM hackernews_rmt
FINAL
```
```response
┌─id─┬─author──┬─comment─────────┬─views─┐
│  2 │ ch_fan  │ This is post #2 │   200 │
│  1 │ ricardo │ This is post #1 │   100 │
└────┴─────────┴─────────────────┴───────┘
```
The result only has 2 rows, and the last row inserted is the row that gets returned.
:::note
Using `FINAL` works okay if you have a small amount of data. If you are dealing with a large amount of data, using `FINAL` is probably not the best option. Let's discuss a better option for finding the latest value of a column.
:::
Avoiding FINAL {#avoiding-final}
Let's update the `views` column again for both unique rows:
```sql
INSERT INTO hackernews_rmt VALUES
    (1, 'ricardo', 'This is post #1', 150),
    (2, 'ch_fan', 'This is post #2', 250)
```
The table has 6 rows now, because an actual merge has not happened yet (only the query-time merge when we used `FINAL`):
```sql
SELECT *
FROM hackernews_rmt
```
```response
┌─id─┬─author──┬─comment─────────┬─views─┐
│  2 │ ch_fan  │ This is post #2 │   200 │
│  1 │ ricardo │ This is post #1 │   100 │
└────┴─────────┴─────────────────┴───────┘
┌─id─┬─author──┬─comment─────────┬─views─┐
│  2 │ ch_fan  │ This is post #2 │     0 │
│  1 │ ricardo │ This is post #1 │     0 │
└────┴─────────┴─────────────────┴───────┘
┌─id─┬─author──┬─comment─────────┬─views─┐
│  2 │ ch_fan  │ This is post #2 │   250 │
│  1 │ ricardo │ This is post #1 │   150 │
└────┴─────────┴─────────────────┴───────┘
```
Instead of using `FINAL`, let's use some business logic - we know that the `views` column is always increasing, so we can select the row with the largest value using the `max` function after grouping by the desired columns:
```sql
SELECT
    id,
    author,
    comment,
    max(views)
FROM hackernews_rmt
GROUP BY (id, author, comment)
```
```response
┌─id─┬─author──┬─comment─────────┬─max(views)─┐
│  2 │ ch_fan  │ This is post #2 │        250 │
│  1 │ ricardo │ This is post #1 │        150 │
└────┴─────────┴─────────────────┴────────────┘
```
Grouping as shown in the query above can actually be more efficient (in terms of query performance) than using the `FINAL` keyword.
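The same "latest value" logic is easy to picture in plain Python. This is a sketch that assumes, as the text does, that `views` only ever grows: group by the key and keep the maximum.

```python
# Sketch of the GROUP BY ... max(views) approach, using the six rows
# inserted in the example above.
rows = [
    (1, "ricardo", "This is post #1", 0),
    (2, "ch_fan", "This is post #2", 0),
    (1, "ricardo", "This is post #1", 100),
    (2, "ch_fan", "This is post #2", 200),
    (1, "ricardo", "This is post #1", 150),
    (2, "ch_fan", "This is post #2", 250),
]

latest = {}
for id_, author, comment, views in rows:
    key = (id_, author, comment)
    latest[key] = max(latest.get(key, 0), views)  # works because views only grows

print(sorted(latest.items()))
# [((1, 'ricardo', 'This is post #1'), 150), ((2, 'ch_fan', 'This is post #2'), 250)]
```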
Our Deleting and Updating Data training module expands on this example, including how to use a `version` column with `ReplacingMergeTree`.
Using CollapsingMergeTree for updating columns frequently {#using-collapsingmergetree-for-updating-columns-frequently}
Updating a column involves deleting an existing row and replacing it with new values. As you have already seen, this type of mutation in ClickHouse happens *eventually*, during merges. If you have a lot of rows to update, it can actually be more efficient to avoid `ALTER TABLE ... UPDATE` and instead just insert the new data alongside the existing data. We could add a column that denotes whether the data is stale or new... and there is actually a table engine that already implements this behavior very nicely, especially considering that it deletes the stale data automatically for you. Let's see how it works.
Suppose we track the number of views that a Hacker News comment has using an external system, and every few hours we push the data into ClickHouse. We want the old rows deleted, and the new rows to represent the new state of each Hacker News comment. We can use a `CollapsingMergeTree` to implement this behavior.
Let's define a table to store the number of views:
```sql
CREATE TABLE hackernews_views (
    id UInt32,
    author String,
    views UInt64,
    sign Int8
)
ENGINE = CollapsingMergeTree(sign)
PRIMARY KEY (id, author)
```
Notice the `hackernews_views` table has an `Int8` column named `sign`, which is referred to as the *sign* column. The name of the sign column is arbitrary, but the `Int8` data type is required, and notice the column name is passed in to the constructor of the `CollapsingMergeTree` table engine.
What is the sign column of a `CollapsingMergeTree` table? It represents the *state* of the row, and it can only be 1 or -1. Here is how it works:
- If two rows have the same primary key (or sort order, if that differs from the primary key) but different values of the sign column, then the last row inserted with a +1 becomes the state row and the other rows cancel each other out
- Rows that cancel each other out are deleted during merges
- Rows that do not have a matching pair are kept
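These rules can be sketched in a few lines of Python. This is a toy model, not the engine's actual merge algorithm: rows with the same key cancel in (+1, -1) pairs, and an unmatched +1 survives as the state row.

```python
# Toy model of collapsing at merge time; not the real merge algorithm.
from collections import defaultdict

def collapse(rows):
    """rows: (key, views, sign) tuples; returns the surviving row per key."""
    by_key = defaultdict(list)
    for key, views, sign in rows:
        by_key[key].append((views, sign))
    survivors = []
    for key, entries in by_key.items():
        pos = [v for v, s in entries if s == 1]
        neg = [v for v, s in entries if s == -1]
        if len(pos) > len(neg):                  # an unmatched +1 row remains
            survivors.append((key, pos[-1], 1))  # the last +1 row wins
    return survivors

rows = [((123, "ricardo"), 0, 1),    # original state row
        ((123, "ricardo"), 0, -1),   # cancel row
        ((123, "ricardo"), 150, 1)]  # new state row
print(collapse(rows))  # [((123, 'ricardo'), 150, 1)]
```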
Let's add a row to the `hackernews_views` table. Since it is the only row for this primary key, we set its sign to 1:
```sql
INSERT INTO hackernews_views VALUES
    (123, 'ricardo', 0, 1)
```
Now suppose we want to change the views column. Insert two rows: one that cancels the existing row, and one that contains the new state of the row:
```sql
INSERT INTO hackernews_views VALUES
    (123, 'ricardo', 0, -1),
    (123, 'ricardo', 150, 1)
```
The table now has 3 rows with the primary key `(123, 'ricardo')`:
```sql
SELECT *
FROM hackernews_views
```
```response
┌──id─┬─author──┬─views─┬─sign─┐
│ 123 │ ricardo │     0 │   -1 │
│ 123 │ ricardo │   150 │    1 │
└─────┴─────────┴───────┴──────┘
┌──id─┬─author──┬─views─┬─sign─┐
│ 123 │ ricardo │     0 │    1 │
└─────┴─────────┴───────┴──────┘
```
Notice adding `FINAL` returns the current state row:
```sql
SELECT *
FROM hackernews_views
FINAL
```
```response
┌──id─┬─author──┬─views─┬─sign─┐
│ 123 │ ricardo │   150 │    1 │
└─────┴─────────┴───────┴──────┘
```
But of course, using `FINAL` is not recommended for large tables.
:::note
The value passed in for the `views` column in our example is not really needed, nor does it have to match the current value of `views` in the old row. In fact, you can cancel a row with just the primary key and a -1:
```sql
INSERT INTO hackernews_views(id, author, sign) VALUES
    (123, 'ricardo', -1)
```
:::
Real-time updates from multiple threads {#real-time-updates-from-multiple-threads}
With a `CollapsingMergeTree` table, rows cancel each other using a sign column, and the state of a row is determined by the last row inserted. But this can be problematic if you are inserting rows from different threads, where rows can arrive out of order. Using the "last" row does not work in this situation.
This is where `VersionedCollapsingMergeTree` comes in handy - it collapses rows just like `CollapsingMergeTree`, but instead of keeping the last row inserted, it keeps the row with the highest value of a version column that you specify.
Let's look at an example. Suppose we want to track the number of views of our Hacker News comments, and the data is updated frequently. We want reporting to use the latest values without forcing or waiting for merges. We start with a table similar to the `CollapsingMergeTree` example, except we add a column to store the version of the state of the row:
```sql
CREATE TABLE hackernews_views_vcmt (
    id UInt32,
    author String,
    views UInt64,
    sign Int8,
    version UInt32
)
ENGINE = VersionedCollapsingMergeTree(sign, version)
PRIMARY KEY (id, author)
```
Notice the table uses `VersionedCollapsingMergeTree` as the engine and passes in the sign column and a version column. Here is how the table works:
- It deletes each pair of rows that have the same primary key and version but a different sign
- The order in which rows were inserted does not matter
- Note that if the version column is not part of the primary key, ClickHouse adds it to the primary key implicitly as the last field
You use the same type of logic when writing queries: group by the primary key and use clever logic to avoid rows that have been canceled but not yet deleted. Let's add some rows to the `hackernews_views_vcmt` table:
```sql
INSERT INTO hackernews_views_vcmt VALUES
    (1, 'ricardo', 0, 1, 1),
    (2, 'ch_fan', 0, 1, 1),
    (3, 'kenny', 0, 1, 1)
```
Now we update two of the rows and delete one of them. To cancel a row, be sure to include the prior version number (since it is a part of the primary key):
```sql
INSERT INTO hackernews_views_vcmt VALUES
    (1, 'ricardo', 0, -1, 1),
    (1, 'ricardo', 50, 1, 2),
    (2, 'ch_fan', 0, -1, 1),
    (3, 'kenny', 0, -1, 1),
    (3, 'kenny', 1000, 1, 2)
```
We will run the same query as before, which cleverly adds and subtracts values based on the sign column:
```sql
SELECT
    id,
    author,
    sum(views * sign)
FROM hackernews_views_vcmt
GROUP BY (id, author)
HAVING sum(sign) > 0
ORDER BY id ASC
```
The result is two rows:
```response
┌─id─┬─author──┬─sum(multiply(views, sign))─┐
│  1 │ ricardo │                         50 │
│  3 │ kenny   │                       1000 │
└────┴─────────┴────────────────────────────┘
```
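The sign arithmetic behind that query can be replayed in Python over the same eight inserted rows: canceled rows contribute `views * -1`, so summing `views * sign` per key yields the live value, and `sum(sign) > 0` filters out fully deleted keys (like id 2 below).

```python
# Replaying the sum(views * sign) ... HAVING sum(sign) > 0 logic in Python.
from collections import defaultdict

rows = [  # (id, author, views, sign, version), as inserted above
    (1, "ricardo", 0, 1, 1), (2, "ch_fan", 0, 1, 1), (3, "kenny", 0, 1, 1),
    (1, "ricardo", 0, -1, 1), (1, "ricardo", 50, 1, 2),
    (2, "ch_fan", 0, -1, 1),
    (3, "kenny", 0, -1, 1), (3, "kenny", 1000, 1, 2),
]

totals = defaultdict(lambda: [0, 0])  # key -> [sum(views*sign), sum(sign)]
for id_, author, views, sign, _version in rows:
    totals[(id_, author)][0] += views * sign
    totals[(id_, author)][1] += sign

result = {k: v for k, (v, s) in totals.items() if s > 0}  # HAVING sum(sign) > 0
print(sorted(result.items()))  # [((1, 'ricardo'), 50), ((3, 'kenny'), 1000)]
```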
Let's force a table merge:
```sql
OPTIMIZE TABLE hackernews_views_vcmt
```
There should only be two rows in the result:
```sql
SELECT *
FROM hackernews_views_vcmt
```
```response
┌─id─┬─author──┬─views─┬─sign─┬─version─┐
│  1 │ ricardo │    50 │    1 │       2 │
│  3 │ kenny   │  1000 │    1 │       2 │
└────┴─────────┴───────┴──────┴─────────┘
```
A `VersionedCollapsingMergeTree` table is quite handy when you want to implement deduplication while inserting rows from multiple clients and/or threads.
Why aren't my rows being deduplicated? {#why-arent-my-rows-being-deduplicated}
One reason inserted rows may not be deduplicated is the use of a non-idempotent function or expression in your `INSERT` statement. For example, if you are inserting rows with the column `createdAt DateTime64(3) DEFAULT now()`, your rows are guaranteed to be unique because each row gets a unique default value for the `createdAt` column. The MergeTree / ReplicatedMergeTree table engine will not know to deduplicate the rows, as each inserted row generates a unique checksum.
In this case, you can specify your own `insert_deduplication_token` for each batch of rows to ensure that multiple inserts of the same batch will not result in the same rows being re-inserted. Please see the documentation on `insert_deduplication_token` for more details about how to use this setting.
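The idea can be sketched with a toy dedup model in Python. This is a deliberate simplification, not ClickHouse's actual mechanism: deduplication normally keys on a checksum of the inserted block, which a `now()`-style default makes unique on every retry, while an explicit token restores deduplication.

```python
# Toy model of insert deduplication; not ClickHouse's actual mechanism.
import hashlib

seen = set()  # checksums/tokens of previously accepted batches

def insert(batch, token=None):
    """Return True if the batch is accepted, False if deduplicated."""
    key = token or hashlib.sha256(repr(batch).encode()).hexdigest()
    if key in seen:
        return False
    seen.add(key)
    return True

clock = iter(range(1000))  # stand-in for now(): a new value on every call
print(insert([(1, "a")], token="batch-1"))  # True
print(insert([(1, "a")], token="batch-1"))  # False, retry is deduplicated
print(insert([(1, "a", next(clock))]))      # True
print(insert([(1, "a", next(clock))]))      # True, the duplicate slips through
```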
slug: /guides/developer/overview
sidebar_label: 'Advanced guides overview'
description: 'Overview of the advanced guides'
title: 'Advanced Guides'
keywords: ['ClickHouse advanced guides', 'developer guides', 'query optimization', 'materialized views', 'deduplication', 'time series', 'query execution']
doc_type: 'guide'
Advanced guides
This section contains the following advanced guides: | {"source_file": "index.md"} | [
| Guide | Description |
|------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Alternative Query Languages | A guide on alternative supported dialects and how to use them. Provides examples of queries in each dialect. |
| Cascading Materialized Views | A guide on how to create materialized views and cascade them together, combining multiple source tables into a single destination table. Contains an example of using cascading materialized views to aggregate data by month and year for a group of domain names. |
| Debugging memory issues | A guide on how to debug memory issues within ClickHouse. |
| Deduplicating Inserts on Retries | A guide on how to handle situations where you might retry failed inserts. |
| Deduplication strategies | A guide which dives into data deduplication, a technique for removing duplicate rows from your database. Explains differences from primary key-based deduplication in OLTP systems, ClickHouse's approach to deduplication, and how to handle duplicate data scenarios within your ClickHouse queries. |
| Filling gaps in time-series data | A guide discussing how to use the `WITH FILL` clause to fill gaps in time-series data. It covers how to fill gaps with 0 values, how to specify a starting point for filling gaps, how to fill gaps up to a specific end point, and how to interpolate values for cumulative calculations. |
| Manage Data with TTL (Time-to-live) | A guide on how to use TTL (time-to-live) to manage the lifecycle of your data, such as removing or rolling up old rows. |
| Stored procedures & query parameters | A guide explaining that ClickHouse does not support traditional stored procedures, with recommended alternatives including user-defined functions (UDFs), parameterized views, materialized views, and external orchestration. Also covers query parameters for safe parameterized queries (similar to prepared statements). |
| Understanding query execution with the Analyzer | A guide which demystifies ClickHouse query execution by introducing the analyzer tool. It explains how the analyzer breaks down a query into a series of steps, allowing you to visualize and troubleshoot the entire execution process for optimal performance. |
| Using JOINs in ClickHouse | A guide that simplifies joining tables in ClickHouse. It covers different join types (`INNER`, `LEFT`, `RIGHT`, etc.), explores best practices for efficient joins (like placing smaller tables on the right), and provides insights on ClickHouse's internal join algorithms to help you optimize your queries for complex data relationships. |
slug: /guides/developer/lightweight-delete
title: 'Lightweight Delete'
keywords: ['lightweight delete']
description: 'Provides an overview of lightweight deletes in ClickHouse'
doc_type: 'reference'
import Content from '@site/docs/sql-reference/statements/delete.md'; | {"source_file": "lightweight-delete.md"} | [
slug: /guides/replacing-merge-tree
title: 'ReplacingMergeTree'
description: 'Using the ReplacingMergeTree engine in ClickHouse'
keywords: ['replacingmergetree', 'inserts', 'deduplication']
doc_type: 'guide'
import postgres_replacingmergetree from '@site/static/images/migrations/postgres-replacingmergetree.png';
import Image from '@theme/IdealImage';
While transactional databases are optimized for transactional update and delete workloads, OLAP databases offer reduced guarantees for such operations. Instead, they optimize for immutable data inserted in batches for the benefit of significantly faster analytical queries. While ClickHouse offers update operations through mutations, as well as a lightweight means of deleting rows, its column-oriented structure means these operations should be scheduled with care. They are handled asynchronously, processed with a single thread, and require (in the case of updates) data to be rewritten on disk. They should thus not be used for high numbers of small changes.
To process a stream of update and delete rows while avoiding the above usage patterns, we can use the ClickHouse table engine `ReplacingMergeTree`.
Automatic upserts of inserted rows {#automatic-upserts-of-inserted-rows}
The
ReplacingMergeTree table engine
allows update operations to be applied to rows, without needing to use inefficient
ALTER
or
DELETE
statements, by offering the ability for users to insert multiple copies of the same row and denote one as the latest version. A background process, in turn, asynchronously removes older versions of the same row, efficiently imitating an update operation through the use of immutable inserts.
This relies on the ability of the table engine to identify duplicate rows. This is achieved using the
ORDER BY
clause to determine uniqueness, i.e., if two rows have the same values for the columns specified in the
ORDER BY
, they are considered duplicates. A
version
column, specified when defining the table, allows the latest version of a row to be retained when two rows are identified as duplicates i.e. the row with the highest version value is kept.
We illustrate this process in the example below. Here, the rows are uniquely identified by the column A (the `ORDER BY` for the table). We assume these rows have been inserted as two batches, resulting in the formation of two data parts on disk. Later, during an asynchronous background process, these parts are merged together.
ReplacingMergeTree additionally allows a deleted column to be specified. This can contain either 0 or 1, where a value of 1 indicates that the row (and its duplicates) has been deleted and zero is used otherwise.
Note: Deleted rows will not be removed at merge time.
During this process, the following occurs during part merging:
The row identified by the value 1 for column A has both an update row with version 2 and a delete row with version 3 (and a deleted column value of 1). The latest row, marked as deleted, is therefore retained.
The row identified by the value 2 for column A has two update rows. The latter row is retained with a value of 6 for the price column.
The row identified by the value 3 for column A has a row with version 1 and a delete row with version 2. This delete row is retained.
As a result of this merge process, we have four rows representing the final state:
Note that deleted rows are never removed at merge time. They can be forcibly deleted with an `OPTIMIZE TABLE <table> FINAL CLEANUP`. This requires the experimental setting `allow_experimental_replacing_merge_with_cleanup=1`. This should only be issued under the following conditions:

1. You can be sure that no rows with old versions (for those that are being deleted with the cleanup) will be inserted after the operation is issued. If these are inserted, they will be incorrectly retained, as the deleted rows will no longer be present.
2. Ensure all replicas are in sync prior to issuing the cleanup. This can be achieved with the command:

```sql
SYSTEM SYNC REPLICA table
```

We recommend pausing inserts once (1) is guaranteed and until this command and the subsequent cleanup are complete.

Handling deletes with the ReplacingMergeTree is only recommended for tables with a low to moderate number of deletes (less than 10%), unless periods can be scheduled for cleanup with the above conditions.

Tip: Users may also be able to issue `OPTIMIZE ... FINAL CLEANUP` against selective partitions no longer subject to changes.
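As a hedged sketch (the table name and partition value are hypothetical, and the experimental setting is assumed to be enabled), the cleanup might be issued as:

```sql
-- Assumes allow_experimental_replacing_merge_with_cleanup=1 is enabled.
-- Forcibly removes rows marked as deleted across the whole table...
OPTIMIZE TABLE example FINAL CLEANUP;

-- ...or only for a single partition no longer subject to changes.
OPTIMIZE TABLE example PARTITION 202401 FINAL CLEANUP;
```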
Choosing a primary/deduplication key {#choosing-a-primarydeduplication-key}
Above, we highlighted an important additional constraint that must also be satisfied in the case of the ReplacingMergeTree: the values of the `ORDER BY` columns must uniquely identify a row across changes. If migrating from a transactional database like Postgres, the original Postgres primary key should thus be included in the ClickHouse `ORDER BY` clause.
Users of ClickHouse will be familiar with choosing the columns in their tables' `ORDER BY` clause to optimize for query performance. Generally, these columns should be selected based on your frequent queries and listed in order of increasing cardinality. Importantly, the ReplacingMergeTree imposes an additional constraint - these columns must be immutable, i.e., if replicating from Postgres, only add columns to this clause if they do not change in the underlying Postgres data. While other columns can change, these are required to be consistent for unique row identification.
For analytical workloads, the Postgres primary key is generally of little use, as users will rarely perform point row lookups. Given that we recommend columns be ordered by increasing cardinality, and that matches on columns listed earlier in the `ORDER BY` will usually be faster, the Postgres primary key should be appended to the end of the `ORDER BY` (unless it has analytical value). In the case that multiple columns form a primary key in Postgres, they should be appended to the `ORDER BY`, respecting cardinality and the likelihood of query value. Users may also wish to generate a unique primary key using a concatenation of values via a `MATERIALIZED` column.
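For instance, a hypothetical sketch (the table and column names are illustrative) of deriving a single deduplication key from a composite Postgres primary key via a `MATERIALIZED` column:

```sql
-- `tenant_id` and `order_id` together form the Postgres primary key;
-- `dedup_key` concatenates them into a single immutable identifier.
CREATE TABLE example_orders
(
    `tenant_id` UInt32,
    `order_id` UInt64,
    `amount` Decimal(10, 2),
    `version` UInt32,
    `dedup_key` String MATERIALIZED concat(toString(tenant_id), '-', toString(order_id))
)
ENGINE = ReplacingMergeTree(version)
ORDER BY (tenant_id, order_id);
```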
Consider the posts table from the Stack Overflow dataset.
```sql
CREATE TABLE stackoverflow.posts_updateable
(
    `Version` UInt32,
    `Deleted` UInt8,
    `Id` Int32 CODEC(Delta(4), ZSTD(1)),
    `PostTypeId` Enum8('Question' = 1, 'Answer' = 2, 'Wiki' = 3, 'TagWikiExcerpt' = 4, 'TagWiki' = 5, 'ModeratorNomination' = 6, 'WikiPlaceholder' = 7, 'PrivilegeWiki' = 8),
    `AcceptedAnswerId` UInt32,
    `CreationDate` DateTime64(3, 'UTC'),
    `Score` Int32,
    `ViewCount` UInt32 CODEC(Delta(4), ZSTD(1)),
    `Body` String,
    `OwnerUserId` Int32,
    `OwnerDisplayName` String,
    `LastEditorUserId` Int32,
    `LastEditorDisplayName` String,
    `LastEditDate` DateTime64(3, 'UTC') CODEC(Delta(8), ZSTD(1)),
    `LastActivityDate` DateTime64(3, 'UTC'),
    `Title` String,
    `Tags` String,
    `AnswerCount` UInt16 CODEC(Delta(2), ZSTD(1)),
    `CommentCount` UInt8,
    `FavoriteCount` UInt8,
    `ContentLicense` LowCardinality(String),
    `ParentId` String,
    `CommunityOwnedDate` DateTime64(3, 'UTC'),
    `ClosedDate` DateTime64(3, 'UTC')
)
ENGINE = ReplacingMergeTree(Version, Deleted)
PARTITION BY toYear(CreationDate)
ORDER BY (PostTypeId, toDate(CreationDate), CreationDate, Id)
```
We use an `ORDER BY` key of `(PostTypeId, toDate(CreationDate), CreationDate, Id)`. The `Id` column, unique for each post, ensures rows can be deduplicated. `Version` and `Deleted` columns are added to the schema as required.
Querying ReplacingMergeTree {#querying-replacingmergetree}
At merge time, the ReplacingMergeTree identifies duplicate rows, using the values of the `ORDER BY` columns as a unique identifier, and either retains only the highest version or removes all duplicates if the latest version indicates a delete. This, however, offers eventual correctness only - it does not guarantee rows will be deduplicated, and you should not rely on it. Queries can, therefore, produce incorrect answers due to update and delete rows being considered in queries.
To obtain correct answers, users will need to complement background merges with query-time deduplication and deletion removal. This can be achieved using the `FINAL` operator.
Consider the posts table above. We can use the normal method of loading this dataset, but specify values of 0 for the added deleted and version columns. For example purposes, we load only 10,000 rows.
```sql
INSERT INTO stackoverflow.posts_updateable SELECT 0 AS Version, 0 AS Deleted, *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/*.parquet') WHERE AnswerCount > 0 LIMIT 10000
0 rows in set. Elapsed: 1.980 sec. Processed 8.19 thousand rows, 3.52 MB (4.14 thousand rows/s., 1.78 MB/s.)
```
Let's confirm the number of rows:
```sql
SELECT count() FROM stackoverflow.posts_updateable
┌─count()─┐
│   10000 │
└─────────┘
1 row in set. Elapsed: 0.002 sec.
```
We now update our post-answer statistics. Rather than updating these values, we insert new copies of 5,000 rows and add one to their version number (meaning 15,000 rows will exist in the table). We can simulate this with a simple `INSERT INTO SELECT`:
```sql
INSERT INTO posts_updateable SELECT
Version + 1 AS Version,
Deleted,
Id,
PostTypeId,
AcceptedAnswerId,
CreationDate,
Score,
ViewCount,
Body,
OwnerUserId,
OwnerDisplayName,
LastEditorUserId,
LastEditorDisplayName,
LastEditDate,
LastActivityDate,
Title,
Tags,
AnswerCount,
CommentCount,
FavoriteCount,
ContentLicense,
ParentId,
CommunityOwnedDate,
ClosedDate
FROM posts_updateable -- select 5000 random rows
WHERE (Id % toInt32(floor(randUniform(1, 11)))) = 0
LIMIT 5000
0 rows in set. Elapsed: 4.056 sec. Processed 1.42 million rows, 2.20 GB (349.63 thousand rows/s., 543.39 MB/s.)
```
In addition, we delete 1,000 random posts by reinserting the rows but with a deleted column value of 1. Again, this can be simulated with a simple `INSERT INTO SELECT`:
```sql
INSERT INTO posts_updateable SELECT
Version + 1 AS Version,
1 AS Deleted,
Id,
PostTypeId,
AcceptedAnswerId,
CreationDate,
Score,
ViewCount,
Body,
OwnerUserId,
OwnerDisplayName,
LastEditorUserId,
LastEditorDisplayName,
LastEditDate,
LastActivityDate,
Title,
Tags,
AnswerCount + 1 AS AnswerCount,
CommentCount,
FavoriteCount,
ContentLicense,
ParentId,
CommunityOwnedDate,
ClosedDate
FROM posts_updateable -- select 1000 random rows
WHERE (Id % toInt32(floor(randUniform(1, 11)))) = 0 AND AnswerCount > 0
LIMIT 1000
0 rows in set. Elapsed: 0.166 sec. Processed 135.53 thousand rows, 212.65 MB (816.30 thousand rows/s., 1.28 GB/s.)
```
The result of the above operations is 16,000 rows in the table, i.e., 10,000 + 5,000 + 1,000. The correct total, however, is only 1,000 rows fewer than our original total, i.e., 10,000 - 1,000 = 9,000.
```sql
SELECT count()
FROM posts_updateable
┌─count()─┐
│   10000 │
└─────────┘
1 row in set. Elapsed: 0.002 sec.
```
Your results will vary here depending on the merges that have occurred. We can see the total here is different, as we have duplicate rows. Applying `FINAL` to the table delivers the correct result.
```sql
SELECT count()
FROM posts_updateable
FINAL
┌─count()─┐
│    9000 │
└─────────┘
1 row in set. Elapsed: 0.006 sec. Processed 11.81 thousand rows, 212.54 KB (2.14 million rows/s., 38.61 MB/s.)
Peak memory usage: 8.14 MiB.
```
FINAL performance {#final-performance}
The `FINAL` operator does have a small performance overhead on queries. This will be most noticeable when queries are not filtering on primary key columns, causing more data to be read and increasing the deduplication overhead. If users filter on key columns using a `WHERE` condition, the data loaded and passed for deduplication will be reduced.
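For example, a sketch of a `FINAL` query that filters on the leading primary key column of the posts table above, limiting the rows that must be deduplicated:

```sql
-- Filtering on the first ORDER BY column (PostTypeId) prunes data
-- before deduplication, so FINAL has less work to do.
SELECT count()
FROM posts_updateable
FINAL
WHERE PostTypeId = 'Question';
```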
If the `WHERE` condition does not use a key column, ClickHouse does not currently utilize the `PREWHERE` optimization when using `FINAL`. This optimization aims to reduce the rows read for non-filtered columns. Examples of emulating this `PREWHERE`, and thus potentially improving performance, can be found here.
Exploiting partitions with ReplacingMergeTree {#exploiting-partitions-with-replacingmergetree}
Merging of data in ClickHouse occurs at a partition level. When using ReplacingMergeTree, we recommend users partition their table according to best practices, provided users can ensure this partitioning key does not change for a row. This will ensure updates pertaining to the same row will be sent to the same ClickHouse partition. You may reuse the same partition key as Postgres, provided you adhere to the best practices outlined here.
Assuming this is the case, users can use the setting `do_not_merge_across_partitions_select_final=1` to improve `FINAL` query performance. This setting causes partitions to be merged and processed independently when using `FINAL`.
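The setting can be applied per query; a sketch (using the partitioned `posts_with_part` table defined later in this section):

```sql
SELECT count()
FROM posts_with_part
FINAL
SETTINGS do_not_merge_across_partitions_select_final = 1;
```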
Consider the following posts table, where we use no partitioning:
```sql
CREATE TABLE stackoverflow.posts_no_part
(
    `Version` UInt32,
    `Deleted` UInt8,
    `Id` Int32 CODEC(Delta(4), ZSTD(1)),
...
)
ENGINE = ReplacingMergeTree
ORDER BY (PostTypeId, toDate(CreationDate), CreationDate, Id)
INSERT INTO stackoverflow.posts_no_part SELECT 0 AS Version, 0 AS Deleted, *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/*.parquet')
0 rows in set. Elapsed: 182.895 sec. Processed 59.82 million rows, 38.07 GB (327.07 thousand rows/s., 208.17 MB/s.)
```
To ensure `FINAL` is required to do some work, we update 1m rows, incrementing their `AnswerCount` by inserting duplicate rows.
```sql
INSERT INTO posts_no_part SELECT Version + 1 AS Version, Deleted, Id, PostTypeId, AcceptedAnswerId, CreationDate, Score, ViewCount, Body, OwnerUserId, OwnerDisplayName, LastEditorUserId, LastEditorDisplayName, LastEditDate, LastActivityDate, Title, Tags, AnswerCount + 1 AS AnswerCount, CommentCount, FavoriteCount, ContentLicense, ParentId, CommunityOwnedDate, ClosedDate
FROM posts_no_part
LIMIT 1000000
```
Computing the sum of answers per year with `FINAL`:
```sql
SELECT toYear(CreationDate) AS year, sum(AnswerCount) AS total_answers
FROM posts_no_part
FINAL
GROUP BY year
ORDER BY year ASC
┌─year─┬─total_answers─┐
│ 2008 │        371480 │
...
│ 2024 │        127765 │
└──────┴───────────────┘
17 rows in set. Elapsed: 2.338 sec. Processed 122.94 million rows, 1.84 GB (52.57 million rows/s., 788.58 MB/s.)
Peak memory usage: 2.09 GiB.
```
Repeating these same steps for a table partitioned by year, and repeating the above query with `do_not_merge_across_partitions_select_final=1`:

```sql
CREATE TABLE stackoverflow.posts_with_part
(
    `Version` UInt32,
    `Deleted` UInt8,
    `Id` Int32 CODEC(Delta(4), ZSTD(1)),
...
)
ENGINE = ReplacingMergeTree
PARTITION BY toYear(CreationDate)
ORDER BY (PostTypeId, toDate(CreationDate), CreationDate, Id)
// populate & update omitted
SELECT toYear(CreationDate) AS year, sum(AnswerCount) AS total_answers
FROM posts_with_part
FINAL
GROUP BY year
ORDER BY year ASC
┌─year─┬─total_answers─┐
│ 2008 │        387832 │
│ 2009 │       1165506 │
│ 2010 │       1755437 │
...
│ 2023 │        787032 │
│ 2024 │        127765 │
└──────┴───────────────┘
17 rows in set. Elapsed: 0.994 sec. Processed 64.65 million rows, 983.64 MB (65.02 million rows/s., 989.23 MB/s.)
```
As shown, partitioning has significantly improved query performance in this case by allowing the deduplication process to occur at a partition level in parallel.
Merge behavior considerations {#merge-behavior-considerations}
ClickHouse's merge selection mechanism goes beyond simple merging of parts. Below, we examine this behavior in the context of ReplacingMergeTree, including configuration options for enabling more aggressive merging of older data and considerations for larger parts.
Merge selection logic {#merge-selection-logic}
While merging aims to minimize the number of parts, it also balances this goal against the cost of write amplification. Consequently, some ranges of parts are excluded from merging if they would lead to excessive write amplification, based on internal calculations. This behavior helps prevent unnecessary resource usage and extends the lifespan of storage components.
Merging behavior on large parts {#merging-behavior-on-large-parts}
The ReplacingMergeTree engine in ClickHouse is optimized for managing duplicate rows by merging data parts, keeping only the latest version of each row based on a specified unique key. However, when a merged part reaches the max_bytes_to_merge_at_max_space_in_pool threshold, it will no longer be selected for further merging, even if min_age_to_force_merge_seconds is set. As a result, automatic merges can no longer be relied upon to remove duplicates that may accumulate with ongoing data insertion.
To address this, users can invoke OPTIMIZE FINAL to manually merge parts and remove duplicates. Unlike automatic merges, OPTIMIZE FINAL bypasses the max_bytes_to_merge_at_max_space_in_pool threshold, merging parts based solely on available resources, particularly disk space, until a single part remains in each partition. However, this approach can be memory-intensive on large tables and may require repeated execution as new data is added.
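A sketch of this manual merge, using the `posts_updateable` table from the examples above (note this can be resource-intensive on large tables):

```sql
-- Bypasses max_bytes_to_merge_at_max_space_in_pool and merges parts
-- down to one per partition, removing duplicates in the process.
OPTIMIZE TABLE posts_updateable FINAL;
```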
For a more sustainable solution that maintains performance, partitioning the table is recommended. This can help prevent data parts from reaching the maximum merge size and reduces the need for ongoing manual optimizations.
Partitioning and merging across partitions {#partitioning-and-merging-across-partitions} | {"source_file": "replacing-merge-tree.md"} | [
-0.0731828436255455,
-0.02331741899251938,
0.07369954884052277,
0.03334850072860718,
0.039100244641304016,
-0.05122555047273636,
-0.044729866087436676,
0.02649977058172226,
0.0005006061983294785,
0.008293206803500652,
0.020202316343784332,
0.00970777589827776,
-0.021376309916377068,
-0.055... |
15c55c44-cd7a-49cc-94cf-4d7eec319188 | Partitioning and merging across partitions {#partitioning-and-merging-across-partitions}
As discussed in Exploiting Partitions with ReplacingMergeTree, we recommend partitioning tables as a best practice. Partitioning isolates data for more efficient merges and avoids merging across partitions, particularly during query execution. This behavior is enhanced in versions from 23.12 onward: if the partition key is a prefix of the sorting key, merging across partitions is not performed at query time, leading to faster query performance.
Tuning merges for better query performance {#tuning-merges-for-better-query-performance}
By default, min_age_to_force_merge_seconds and min_age_to_force_merge_on_partition_only are set to 0 and false, respectively, disabling these features. In this configuration, ClickHouse will apply standard merging behavior without forcing merges based on partition age.
If a value for min_age_to_force_merge_seconds is specified, ClickHouse will ignore normal merging heuristics for parts older than the specified period. While this is generally only effective if the goal is to minimize the total number of parts, it can improve query performance in ReplacingMergeTree by reducing the number of parts needing merging at query time.
This behavior can be further tuned by setting min_age_to_force_merge_on_partition_only=true, requiring all parts in the partition to be older than min_age_to_force_merge_seconds for aggressive merging. This configuration allows older partitions to merge down to a single part over time, which consolidates data and maintains query performance.
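A hedged sketch of applying these settings at table creation (the table name, schema, and values are illustrative, not recommendations):

```sql
CREATE TABLE example_rmt_tuned
(
    `id` UInt64,
    `created_at` Date,
    `version` UInt32
)
ENGINE = ReplacingMergeTree(version)
PARTITION BY toYYYYMM(created_at)
ORDER BY id
SETTINGS min_age_to_force_merge_seconds = 3600,
         min_age_to_force_merge_on_partition_only = true;
```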
Recommended settings {#recommended-settings}
:::warning
Tuning merge behavior is an advanced operation. We recommend consulting with ClickHouse support before enabling these settings in production workloads.
:::
In most cases, setting min_age_to_force_merge_seconds to a low valueβsignificantly less than the partition periodβis preferred. This minimizes the number of parts and prevents unnecessary merging at query time with the FINAL operator.
For example, consider a monthly partition that has already been merged into a single part. If a small, stray insert creates a new part within this partition, query performance can suffer because ClickHouse must read multiple parts until the merge completes. Setting min_age_to_force_merge_seconds can ensure these parts are merged aggressively, preventing a degradation in query performance.
slug: /guides/developer/mutations
sidebar_label: 'Updating and deleting data'
sidebar_position: 1
keywords: ['UPDATE', 'DELETE', 'mutations']
title: 'Updating and deleting ClickHouse data'
description: 'Describes how to perform update and delete operations in ClickHouse'
show_related_blogs: false
doc_type: 'guide'
Updating and deleting ClickHouse data with mutations
Although ClickHouse is geared toward high volume analytic workloads, it is possible in some situations to modify or
delete existing data. These operations are labeled "mutations" and are executed using the
ALTER TABLE
command.
:::tip
If you need to perform frequent updates, consider using deduplication in ClickHouse, which allows you to update and/or delete rows without generating a mutation event. Alternatively, use lightweight updates or lightweight deletes.
:::
Updating data {#updating-data}
Use the `ALTER TABLE...UPDATE` command to update rows in a table:

```sql
ALTER TABLE [<database>.]<table> UPDATE <column> = <expression> WHERE <filter_expr>
```
`<expression>` is the new value for the column where the `<filter_expr>` is satisfied. The `<expression>` must be the same datatype as the column, or be convertible to the same datatype using the `CAST` operator. The `<filter_expr>` should return a `UInt8` (zero or non-zero) value for each row of the data. Multiple `UPDATE <column>` statements can be combined in a single `ALTER TABLE` command, separated by commas.
Examples:

A mutation like this allows replacing `visitor_id` values with new ones using a dictionary lookup:

```sql
ALTER TABLE website.clicks
UPDATE visitor_id = getDict('visitors', 'new_visitor_id', visitor_id)
WHERE visit_date < '2022-01-01'
```
Modifying multiple values in one command can be more efficient than multiple commands:

```sql
ALTER TABLE website.clicks
UPDATE url = substring(url, position(url, '://') + 3), visitor_id = new_visit_id
WHERE visit_date < '2022-01-01'
```
Mutations can be executed `ON CLUSTER` for sharded tables:

```sql
ALTER TABLE clicks ON CLUSTER main_cluster
UPDATE click_count = click_count / 2
WHERE visitor_id ILIKE '%robot%'
```
:::note
It is not possible to update columns that are part of the primary or sorting key.
:::
Deleting data {#deleting-data}
Use the `ALTER TABLE` command to delete rows:

```sql
ALTER TABLE [<database>.]<table> DELETE WHERE <filter_expr>
```

The `<filter_expr>` should return a `UInt8` value for each row of data.
Examples:

Delete any records where a column is in an array of values:

```sql
ALTER TABLE website.clicks DELETE WHERE visitor_id IN (253, 1002, 4277)
```

This query deletes rows matching both a time filter and a page filter, executed across the cluster:

```sql
ALTER TABLE clicks ON CLUSTER main_cluster DELETE WHERE visit_date < '2022-01-02 15:00:00' AND page_id = '573'
```
:::note
To delete all of the data in a table, it is more efficient to use the `TRUNCATE TABLE [<database>.]<table>` command. This command can also be executed `ON CLUSTER`.
:::
View the `DELETE` statement docs page for more details.
Lightweight deletes {#lightweight-deletes}
Another option for deleting rows is to use the `DELETE FROM` command, which is referred to as a lightweight delete. The deleted rows are marked as deleted immediately and will be automatically filtered out of all subsequent queries, so you do not have to wait for a merging of parts or use the `FINAL` keyword. Cleanup of data happens asynchronously in the background.

```sql
DELETE FROM [db.]table [ON CLUSTER cluster] [WHERE expr]
```
For example, the following query deletes all rows from the `hits` table where the `Title` column contains the text `hello`:

```sql
DELETE FROM hits WHERE Title LIKE '%hello%';
```
A few notes about lightweight deletes:

- This feature is only available for the `MergeTree` table engine family.
- Lightweight deletes are asynchronous by default. Set `mutations_sync` equal to 1 to wait for one replica to process the statement, and set `mutations_sync` to 2 to wait for all replicas.
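For instance, a sketch of a synchronous lightweight delete, reusing the `hits` example above and the `mutations_sync` setting just described:

```sql
-- Wait for all replicas to process the delete before returning.
DELETE FROM hits WHERE Title LIKE '%hello%'
SETTINGS mutations_sync = 2;
```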
slug: /guides/developer/merge-table-function
sidebar_label: 'Merge table function'
title: 'Merge table function'
description: 'Query multiple tables at the same time.'
doc_type: 'reference'
keywords: ['merge', 'table function', 'query patterns', 'table engine', 'data access']
The `merge` table function lets us query multiple tables in parallel. It does this by creating a temporary `Merge` table and deriving this table's structure by taking a union of their columns and deriving common types.
Setup tables {#setup-tables}
We're going to learn how to use this function with help from Jeff Sackmann's tennis dataset. We're going to process CSV files that contain matches going back to the 1960s, but we'll create a slightly different schema for each decade. We'll also add a couple of extra columns for the 1990s decade.
The import statements are shown below:
```sql
CREATE OR REPLACE TABLE atp_matches_1960s ORDER BY tourney_id AS
SELECT tourney_id, surface, winner_name, loser_name, winner_seed, loser_seed, score
FROM url('https://raw.githubusercontent.com/JeffSackmann/tennis_atp/refs/heads/master/atp_matches_{1968..1969}.csv')
SETTINGS schema_inference_make_columns_nullable=0,
schema_inference_hints='winner_seed Nullable(String), loser_seed Nullable(UInt8)';
CREATE OR REPLACE TABLE atp_matches_1970s ORDER BY tourney_id AS
SELECT tourney_id, surface, winner_name, loser_name, winner_seed, loser_seed, splitByWhitespace(score) AS score
FROM url('https://raw.githubusercontent.com/JeffSackmann/tennis_atp/refs/heads/master/atp_matches_{1970..1979}.csv')
SETTINGS schema_inference_make_columns_nullable=0,
schema_inference_hints='winner_seed Nullable(UInt8), loser_seed Nullable(UInt8)';
CREATE OR REPLACE TABLE atp_matches_1980s ORDER BY tourney_id AS
SELECT tourney_id, surface, winner_name, loser_name, winner_seed, loser_seed, splitByWhitespace(score) AS score
FROM url('https://raw.githubusercontent.com/JeffSackmann/tennis_atp/refs/heads/master/atp_matches_{1980..1989}.csv')
SETTINGS schema_inference_make_columns_nullable=0,
schema_inference_hints='winner_seed Nullable(UInt16), loser_seed Nullable(UInt16)';
CREATE OR REPLACE TABLE atp_matches_1990s ORDER BY tourney_id AS
SELECT tourney_id, surface, winner_name, loser_name, winner_seed, loser_seed, splitByWhitespace(score) AS score,
toBool(arrayExists(x -> position(x, 'W/O') > 0, score))::Nullable(bool) AS walkover,
toBool(arrayExists(x -> position(x, 'RET') > 0, score))::Nullable(bool) AS retirement
FROM url('https://raw.githubusercontent.com/JeffSackmann/tennis_atp/refs/heads/master/atp_matches_{1990..1999}.csv')
SETTINGS schema_inference_make_columns_nullable=0,
schema_inference_hints='winner_seed Nullable(UInt16), loser_seed Nullable(UInt16), surface Enum(\'Hard\', \'Grass\', \'Clay\', \'Carpet\')';
```
Schema of multiple tables {#schema-multiple-tables}
We can run the following query to list the columns in each table along with their types side by side, so that it's easier to see the differences.
```sql
SELECT * EXCEPT(position) FROM (
    SELECT position, name,
           any(if(table = 'atp_matches_1960s', type, null)) AS `1960s`,
           any(if(table = 'atp_matches_1970s', type, null)) AS `1970s`,
           any(if(table = 'atp_matches_1980s', type, null)) AS `1980s`,
           any(if(table = 'atp_matches_1990s', type, null)) AS `1990s`
    FROM system.columns
    WHERE database = currentDatabase() AND table LIKE 'atp_matches%'
    GROUP BY ALL
    ORDER BY position ASC
)
SETTINGS output_format_pretty_max_value_width=25;
```
```text
┌─name────────┬─1960s────────────┬─1970s───────────┬─1980s────────────┬─1990s─────────────────────┐
│ tourney_id  │ String           │ String          │ String           │ String                    │
│ surface     │ String           │ String          │ String           │ Enum8('Hard' = 1, 'Grass'⋯│
│ winner_name │ String           │ String          │ String           │ String                    │
│ loser_name  │ String           │ String          │ String           │ String                    │
│ winner_seed │ Nullable(String) │ Nullable(UInt8) │ Nullable(UInt16) │ Nullable(UInt16)          │
│ loser_seed  │ Nullable(UInt8)  │ Nullable(UInt8) │ Nullable(UInt16) │ Nullable(UInt16)          │
│ score       │ String           │ Array(String)   │ Array(String)    │ Array(String)             │
│ walkover    │ ᴺᵁᴸᴸ             │ ᴺᵁᴸᴸ            │ ᴺᵁᴸᴸ             │ Nullable(Bool)            │
│ retirement  │ ᴺᵁᴸᴸ             │ ᴺᵁᴸᴸ            │ ᴺᵁᴸᴸ             │ Nullable(Bool)            │
└─────────────┴──────────────────┴─────────────────┴──────────────────┴───────────────────────────┘
```
Let's go through the differences:

- 1970s changes the type of `winner_seed` from `Nullable(String)` to `Nullable(UInt8)` and `score` from `String` to `Array(String)`.
- 1980s changes `winner_seed` and `loser_seed` from `Nullable(UInt8)` to `Nullable(UInt16)`.
- 1990s changes `surface` from `String` to `Enum('Hard', 'Grass', 'Clay', 'Carpet')` and adds the `walkover` and `retirement` columns.
Querying multiple tables with merge {#querying-multiple-tables}
Let's write a query to find the matches that John McEnroe won against someone who was seeded #1:
```sql
SELECT loser_name, score
FROM merge('atp_matches*')
WHERE winner_name = 'John McEnroe'
AND loser_seed = 1;
```
```text
┌─loser_name────┬─score───────────────────────────┐
│ Bjorn Borg    │ ['6-3','6-4']                   │
│ Bjorn Borg    │ ['7-6','6-1','6-7','5-7','6-4'] │
│ Bjorn Borg    │ ['7-6','6-4']                   │
│ Bjorn Borg    │ ['4-6','7-6','7-6','6-4']       │
│ Jimmy Connors │ ['6-1','6-3']                   │
│ Ivan Lendl    │ ['6-2','4-6','6-3','6-7','7-6'] │
│ Ivan Lendl    │ ['6-3','3-6','6-3','7-6']       │
│ Ivan Lendl    │ ['6-1','6-3']                   │
│ Stefan Edberg │ ['6-2','6-3']                   │
│ Stefan Edberg │ ['7-6','6-2']                   │
│ Stefan Edberg │ ['6-2','6-2']                   │
│ Jakob Hlasek  │ ['6-3','7-6']                   │
└───────────────┴─────────────────────────────────┘
```
Next, let's say we want to filter those matches to find the ones where McEnroe was seeded #3 or lower.
This is a bit trickier because `winner_seed` uses different types across the various tables:
```sql
SELECT loser_name, score, winner_seed
FROM merge('atp_matches*')
WHERE winner_name = 'John McEnroe'
AND loser_seed = 1
AND multiIf(
    variantType(winner_seed) = 'UInt8', variantElement(winner_seed, 'UInt8') >= 3,
    variantType(winner_seed) = 'UInt16', variantElement(winner_seed, 'UInt16') >= 3,
    variantElement(winner_seed, 'String')::UInt16 >= 3
);
```
We use the `variantType` function to check the type of `winner_seed` for each row and then `variantElement` to extract the underlying value.
When the type is `String`, we cast to a number and then do the comparison.
The result of running the query is shown below:
```text
┌─loser_name────┬─score─────────┬─winner_seed─┐
│ Bjorn Borg    │ ['6-3','6-4'] │           3 │
│ Stefan Edberg │ ['6-2','6-3'] │           6 │
│ Stefan Edberg │ ['7-6','6-2'] │           4 │
│ Stefan Edberg │ ['6-2','6-2'] │           7 │
└───────────────┴───────────────┴─────────────┘
```
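As a minimal standalone illustration of `variantType` and `variantElement` (a sketch; the literal `Variant` cast below is an assumption and may require enabling the experimental setting on older ClickHouse versions):

```sql
-- Build two Variant values with different underlying types and inspect them
SELECT
    arrayJoin([CAST(7, 'Variant(UInt8, String)'),
               CAST('7', 'Variant(UInt8, String)')]) AS v,
    variantType(v) AS type,
    variantElement(v, 'UInt8') AS as_uint8
SETTINGS allow_experimental_variant_type = 1;
```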
Which table do rows come from when using merge? {#which-table-merge}
What if we want to know which table rows come from?
We can use the `_table` virtual column to do this, as shown in the following query:
```sql
SELECT _table, loser_name, score, winner_seed
FROM merge('atp_matches*')
WHERE winner_name = 'John McEnroe'
AND loser_seed = 1
AND multiIf(
    variantType(winner_seed) = 'UInt8', variantElement(winner_seed, 'UInt8') >= 3,
    variantType(winner_seed) = 'UInt16', variantElement(winner_seed, 'UInt16') >= 3,
    variantElement(winner_seed, 'String')::UInt16 >= 3
);
```
```text
┌─_table────────────┬─loser_name────┬─score─────────┬─winner_seed─┐
│ atp_matches_1970s │ Bjorn Borg    │ ['6-3','6-4'] │           3 │
│ atp_matches_1980s │ Stefan Edberg │ ['6-2','6-3'] │           6 │
│ atp_matches_1980s │ Stefan Edberg │ ['7-6','6-2'] │           4 │
│ atp_matches_1980s │ Stefan Edberg │ ['6-2','6-2'] │           7 │
└───────────────────┴───────────────┴───────────────┴─────────────┘
```
We could also use this virtual column as part of a query to count the values for the `walkover` column:
```sql
SELECT _table, walkover, count()
FROM merge('atp_matches*')
GROUP BY ALL
ORDER BY _table;
```
```text
┌─_table────────────┬─walkover─┬─count()─┐
│ atp_matches_1960s │ ᴺᵁᴸᴸ     │    7542 │
│ atp_matches_1970s │ ᴺᵁᴸᴸ     │   39165 │
│ atp_matches_1980s │ ᴺᵁᴸᴸ     │   36233 │
│ atp_matches_1990s │ true     │     128 │
│ atp_matches_1990s │ false    │   37022 │
└───────────────────┴──────────┴─────────┘
```
We can see that the `walkover` column is `NULL` for everything except `atp_matches_1990s`.
We'll need to update our query to check whether the `score` column contains the string `W/O` if the `walkover` column is `NULL`:
```sql
SELECT _table,
       multiIf(
           walkover IS NOT NULL,
           walkover,
           variantType(score) = 'Array(String)',
           toBool(arrayExists(
               x -> position(x, 'W/O') > 0,
               variantElement(score, 'Array(String)')
           )),
           variantElement(score, 'String') LIKE '%W/O%'
       ),
       count()
FROM merge('atp_matches*')
GROUP BY ALL
ORDER BY _table;
```
If the underlying type of `score` is `Array(String)`, we have to go over the array and look for `W/O`, whereas if it has a type of `String` we can just search for `W/O` in the string.
```text
┌─_table────────────┬─multiIf(isNo…, '%W/O%'))─┬─count()─┐
│ atp_matches_1960s │ true                     │     242 │
│ atp_matches_1960s │ false                    │    7300 │
│ atp_matches_1970s │ true                     │     422 │
│ atp_matches_1970s │ false                    │   38743 │
│ atp_matches_1980s │ true                     │      92 │
│ atp_matches_1980s │ false                    │   36141 │
│ atp_matches_1990s │ true                     │     128 │
│ atp_matches_1990s │ false                    │   37022 │
└───────────────────┴──────────────────────────┴─────────┘
```
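The same `multiIf` expression can be reused as a filter, for example to list the walkover matches themselves (a sketch reusing the expression from the counting query; `winner_name` and `loser_name` follow the earlier queries):

```sql
SELECT _table, winner_name, loser_name
FROM merge('atp_matches*')
WHERE multiIf(
    walkover IS NOT NULL,
    walkover,
    variantType(score) = 'Array(String)',
    toBool(arrayExists(
        x -> position(x, 'W/O') > 0,
        variantElement(score, 'Array(String)')
    )),
    variantElement(score, 'String') LIKE '%W/O%'
)
LIMIT 10;
```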
---
slug: /guides/developer/understanding-query-execution-with-the-analyzer
sidebar_label: 'Understanding query execution with the analyzer'
title: 'Understanding Query Execution with the Analyzer'
description: 'Describes how you can use the analyzer to understand how ClickHouse executes your queries'
doc_type: 'guide'
keywords: ['query execution', 'analyzer', 'query optimization', 'explain', 'performance']
---
import analyzer1 from '@site/static/images/guides/developer/analyzer1.png';
import analyzer2 from '@site/static/images/guides/developer/analyzer2.png';
import analyzer3 from '@site/static/images/guides/developer/analyzer3.png';
import analyzer4 from '@site/static/images/guides/developer/analyzer4.png';
import analyzer5 from '@site/static/images/guides/developer/analyzer5.png';
import Image from '@theme/IdealImage';
Understanding query execution with the analyzer
ClickHouse processes queries extremely quickly, but the execution of a query is not a simple story. Let's try to understand how a `SELECT` query gets executed. To illustrate it, let's add some data to a table in ClickHouse:
```sql
CREATE TABLE session_events(
clientId UUID,
sessionId UUID,
pageId UUID,
timestamp DateTime,
type String
) ORDER BY (timestamp);
INSERT INTO session_events SELECT * FROM generateRandom('clientId UUID,
sessionId UUID,
pageId UUID,
timestamp DateTime,
type Enum(\'type1\', \'type2\')', 1, 10, 2) LIMIT 1000;
```
Now that we have some data in ClickHouse, we want to run some queries and understand their execution. The execution of a query is decomposed into many steps. Each step of the query execution can be analyzed and debugged using the corresponding `EXPLAIN` query. These steps are summarized in the chart below:
Let's look at each entity in action during query execution. We are going to take a few queries and then examine them using the `EXPLAIN` statement.
Parser {#parser}
The goal of a parser is to transform the query text into an AST (Abstract Syntax Tree). This step can be visualized using `EXPLAIN AST`:
```sql
EXPLAIN AST SELECT min(timestamp) AS minimum_date, max(timestamp) AS maximum_date FROM session_events;

┌─explain─────────────────────────────────────────────┐
│ SelectWithUnionQuery (children 1)                   │
│  ExpressionList (children 1)                        │
│   SelectQuery (children 2)                          │
│    ExpressionList (children 2)                      │
│     Function min (alias minimum_date) (children 1)  │
│      ExpressionList (children 1)                    │
│       Identifier timestamp                          │
│     Function max (alias maximum_date) (children 1)  │
│      ExpressionList (children 1)                    │
│       Identifier timestamp                          │
│    TablesInSelectQuery (children 1)                 │
│     TablesInSelectQueryElement (children 1)         │
│      TableExpression (children 1)                   │
│       TableIdentifier session_events                │
└─────────────────────────────────────────────────────┘
```
The output is an Abstract Syntax Tree that can be visualized as shown below:
Each node has corresponding children, and the tree as a whole represents the structure of your query. This is a logical structure to help process a query. From an end-user standpoint (unless you are interested in query execution) it is not especially useful; this tool is mainly used by developers.
Analyzer {#analyzer}
ClickHouse currently has two architectures for the analyzer. You can use the old architecture by setting `enable_analyzer=0`. The new architecture is enabled by default. We are going to describe only the new architecture here, given that the old one is going to be deprecated once the new analyzer is generally available.
:::note
The new architecture should provide us with a better framework to improve ClickHouse's performance. However, given that it is a fundamental component of the query processing steps, it also might have a negative impact on some queries and there are known incompatibilities. You can revert to the old analyzer by changing the `enable_analyzer` setting at the query or user level.
:::
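For example, to compare the two analyzers you can toggle the setting per query or per session (a sketch using the `enable_analyzer` setting described above):

```sql
-- Run a single query with the old analyzer
SELECT count() FROM session_events SETTINGS enable_analyzer = 0;

-- Or switch it off for the whole session
SET enable_analyzer = 0;
```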
The analyzer is an important step of the query execution. It takes an AST and transforms it into a query tree. The main benefit of a query tree over an AST is that many of the components are resolved, such as the storage, for instance. We also know from which table to read, aliases are resolved, and the tree knows the different data types used. With all these benefits, the analyzer can apply optimizations. The way these optimizations work is via "passes". Every pass looks for different optimizations. You can see all the passes here. Let's see this in practice with our previous query:
```sql
EXPLAIN QUERY TREE passes=0 SELECT min(timestamp) AS minimum_date, max(timestamp) AS maximum_date FROM session_events SETTINGS allow_experimental_analyzer=1;

┌─explain─────────────────────────────────────────────────────────────────────────────────┐
│ QUERY id: 0                                                                             │
│   PROJECTION                                                                            │
│     LIST id: 1, nodes: 2                                                                │
│       FUNCTION id: 2, alias: minimum_date, function_name: min, function_type: ordinary  │
│         ARGUMENTS                                                                       │
│           LIST id: 3, nodes: 1                                                          │
│             IDENTIFIER id: 4, identifier: timestamp                                     │
│       FUNCTION id: 5, alias: maximum_date, function_name: max, function_type: ordinary  │
│         ARGUMENTS                                                                       │
│           LIST id: 6, nodes: 1                                                          │
│             IDENTIFIER id: 7, identifier: timestamp                                     │
│   JOIN TREE                                                                             │
│     IDENTIFIER id: 8, identifier: session_events                                        │
│   SETTINGS allow_experimental_analyzer=1                                                │
└─────────────────────────────────────────────────────────────────────────────────────────┘
```
```sql
EXPLAIN QUERY TREE passes=20 SELECT min(timestamp) AS minimum_date, max(timestamp) AS maximum_date FROM session_events SETTINGS allow_experimental_analyzer=1;

┌─explain────────────────────────────────────────────────────────────────────────────────────┐
│ QUERY id: 0                                                                                │
│   PROJECTION COLUMNS                                                                       │
│     minimum_date DateTime                                                                  │
│     maximum_date DateTime                                                                  │
│   PROJECTION                                                                               │
│     LIST id: 1, nodes: 2                                                                   │
│       FUNCTION id: 2, function_name: min, function_type: aggregate, result_type: DateTime  │
│         ARGUMENTS                                                                          │
│           LIST id: 3, nodes: 1                                                             │
│             COLUMN id: 4, column_name: timestamp, result_type: DateTime, source_id: 5      │
│       FUNCTION id: 6, function_name: max, function_type: aggregate, result_type: DateTime  │
│         ARGUMENTS                                                                          │
│           LIST id: 7, nodes: 1                                                             │
│             COLUMN id: 4, column_name: timestamp, result_type: DateTime, source_id: 5      │
│   JOIN TREE                                                                                │
│     TABLE id: 5, alias: __table1, table_name: default.session_events                       │
│   SETTINGS allow_experimental_analyzer=1                                                   │
└────────────────────────────────────────────────────────────────────────────────────────────┘
```
Between the two executions, you can see the resolution of aliases and projections.
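You can also step through intermediate states by varying the `passes` value (a sketch; the exact tree shown at each pass depends on your ClickHouse version):

```sql
-- Inspect the query tree after a single optimization pass
EXPLAIN QUERY TREE passes=1
SELECT min(timestamp) AS minimum_date, max(timestamp) AS maximum_date
FROM session_events;
```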
Planner {#planner}
The planner takes a query tree and builds a query plan out of it. The query tree tells us what we want to do with a specific query, and the query plan tells us how we will do it. Additional optimizations are going to be done as part of the query plan. You can use `EXPLAIN PLAN` or `EXPLAIN` to see the query plan (`EXPLAIN` will execute `EXPLAIN PLAN`).
```sql
EXPLAIN PLAN WITH
   (
       SELECT count()
       FROM session_events
   ) AS total_rows
SELECT type, min(timestamp) AS minimum_date, max(timestamp) AS maximum_date, count() / total_rows * 100 AS percentage FROM session_events GROUP BY type

┌─explain───────────────────────────────────────────┐
│ Expression ((Projection + Before ORDER BY))       │
│   Aggregating                                     │
│     Expression (Before GROUP BY)                  │
│       ReadFromMergeTree (default.session_events)  │
└───────────────────────────────────────────────────┘
```
Even though this is giving us some information, we can get more. For example, maybe we want to know the names of the columns on which the projections are built. You can add the header to the query:
```sql
EXPLAIN header = 1
WITH (
    SELECT count()
    FROM session_events
) AS total_rows
SELECT
    type,
    min(timestamp) AS minimum_date,
    max(timestamp) AS maximum_date,
    (count() / total_rows) * 100 AS percentage
FROM session_events
GROUP BY type

┌─explain───────────────────────────────────────────┐
│ Expression ((Projection + Before ORDER BY))       │
│ Header: type String                               │
│         minimum_date DateTime                     │
│         maximum_date DateTime                     │
│         percentage Nullable(Float64)              │
│   Aggregating                                     │
│   Header: type String                             │
│           min(timestamp) DateTime                 │
│           max(timestamp) DateTime                 │
│           count() UInt64                          │
│     Expression (Before GROUP BY)                  │
│     Header: timestamp DateTime                    │
│             type String                           │
│       ReadFromMergeTree (default.session_events)  │
│       Header: timestamp DateTime                  │
│               type String                         │
└───────────────────────────────────────────────────┘
```
So now you know the column names that need to be created for the last projection (`minimum_date`, `maximum_date`, and `percentage`), but you might also want to have the details of all the actions that need to be executed. You can do so by setting `actions=1`.
```sql
EXPLAIN actions = 1
WITH (
    SELECT count()
    FROM session_events
) AS total_rows
SELECT
    type,
    min(timestamp) AS minimum_date,
    max(timestamp) AS maximum_date,
    (count() / total_rows) * 100 AS percentage
FROM session_events
GROUP BY type
┌─explain─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ Expression ((Projection + Before ORDER BY))                                                                                                 │
│ Actions: INPUT :: 0 -> type String : 0                                                                                                      │
│          INPUT : 1 -> min(timestamp) DateTime : 1                                                                                           │
│          INPUT : 2 -> max(timestamp) DateTime : 2                                                                                           │
│          INPUT : 3 -> count() UInt64 : 3                                                                                                    │
│          COLUMN Const(Nullable(UInt64)) -> total_rows Nullable(UInt64) : 4                                                                  │
│          COLUMN Const(UInt8) -> 100 UInt8 : 5                                                                                               │
│          ALIAS min(timestamp) :: 1 -> minimum_date DateTime : 6                                                                             │
│          ALIAS max(timestamp) :: 2 -> maximum_date DateTime : 1                                                                             │
│          FUNCTION divide(count() :: 3, total_rows :: 4) -> divide(count(), total_rows) Nullable(Float64) : 2                                │
│          FUNCTION multiply(divide(count(), total_rows) :: 2, 100 :: 5) -> multiply(divide(count(), total_rows), 100) Nullable(Float64) : 4  │
│          ALIAS multiply(divide(count(), total_rows), 100) :: 4 -> percentage Nullable(Float64) : 5                                          │
│ Positions: 0 6 1 5                                                                                                                          │
│   Aggregating                                                                                                                               │
│   Keys: type                                                                                                                                │
│   Aggregates:                                                                                                                               │
│       min(timestamp)                                                                                                                        │
│         Function: min(DateTime) → DateTime                                                                                                  │
│         Arguments: timestamp                                                                                                                │
│       max(timestamp)                                                                                                                        │
│         Function: max(DateTime) → DateTime                                                                                                  │
│         Arguments: timestamp                                                                                                                │
│       count()                                                                                                                               │
│         Function: count() → UInt64                                                                                                          │
│         Arguments: none                                                                                                                     │
│   Skip merging: 0                                                                                                                           │
│     Expression (Before GROUP BY)                                                                                                            │
│     Actions: INPUT :: 0 -> timestamp DateTime : 0                                                                                           │
│              INPUT :: 1 -> type String : 1                                                                                                  │
│     Positions: 0 1                                                                                                                          │
│       ReadFromMergeTree (default.session_events)                                                                                            │
│       ReadType: Default                                                                                                                     │
│       Parts: 1                                                                                                                              │
│       Granules: 1                                                                                                                           │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
```
0.02447683736681938,
0.0220627598464489,
-0.011976934038102627,
0.016957968473434448,
-0.0460345558822155,
0.03434940427541733,
0.0652879923582077,
0.02390519715845585,
0.040390655398368835,
0.02204812318086624,
0.049782514572143555,
-0.033103637397289276,
-0.0046623460948467255,
0.0354830... |
You can now see all the inputs, functions, aliases, and data types that are being used. You can see some of the optimizations that the planner is going to apply here.
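The options can also be combined in one statement (a sketch combining the `header` and `actions` settings shown above):

```sql
EXPLAIN header = 1, actions = 1
SELECT type, count()
FROM session_events
GROUP BY type;
```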
Query pipeline {#query-pipeline}
A query pipeline is generated from the query plan. The query pipeline is very similar to the query plan, with the difference that it's not a tree but a graph. It highlights how ClickHouse is going to execute a query and what resources are going to be used. Analyzing the query pipeline is very useful to see where the bottleneck is in terms of inputs/outputs. Let's take our previous query and look at the query pipeline execution:
```sql
EXPLAIN PIPELINE
WITH (
    SELECT count()
    FROM session_events
) AS total_rows
SELECT
    type,
    min(timestamp) AS minimum_date,
    max(timestamp) AS maximum_date,
    (count() / total_rows) * 100 AS percentage
FROM session_events
GROUP BY type;

┌─explain─────────────────────────────────────────────────────────────────────┐
│ (Expression)                                                                │
│ ExpressionTransform × 2                                                     │
│   (Aggregating)                                                             │
│   Resize 1 → 2                                                              │
│     AggregatingTransform                                                    │
│       (Expression)                                                          │
│       ExpressionTransform                                                   │
│         (ReadFromMergeTree)                                                 │
│         MergeTreeSelect(pool: PrefetchedReadPool, algorithm: Thread) 0 → 1  │
└─────────────────────────────────────────────────────────────────────────────┘
```
Inside the parentheses is the query plan step, and next to it the processor. This is great information, but given that this is a graph, it would be nice to visualize it as one. We have a setting `graph` that we can set to 1, and we can then specify the output format to be TSV:
```sql
EXPLAIN PIPELINE graph=1 WITH
    (
        SELECT count(*)
        FROM session_events
    ) AS total_rows
SELECT type, min(timestamp) AS minimum_date, max(timestamp) AS maximum_date, count(*) / total_rows * 100 AS percentage FROM session_events GROUP BY type FORMAT TSV;
```
```response
digraph
{
  rankdir="LR";
  { node [shape = rect]
        subgraph cluster_0 {
          label ="Expression";
          style=filled;
          color=lightgrey;
          node [style=filled,color=white];
          { rank = same;
            n5 [label="ExpressionTransform × 2"];
          }
        }
        subgraph cluster_1 {
          label ="Aggregating";
          style=filled;
          color=lightgrey;
          node [style=filled,color=white];
          { rank = same;
            n3 [label="AggregatingTransform"];
            n4 [label="Resize"];
          }
        }
        subgraph cluster_2 {
          label ="Expression";
          style=filled;
          color=lightgrey;
          node [style=filled,color=white];
          { rank = same;
            n2 [label="ExpressionTransform"];
          }
        }
        subgraph cluster_3 {
          label ="ReadFromMergeTree";
          style=filled;
          color=lightgrey;
          node [style=filled,color=white];
          { rank = same;
            n1 [label="MergeTreeSelect(pool: PrefetchedReadPool, algorithm: Thread)"];
          }
        }
  }
  n3 -> n4 [label=""];
  n4 -> n5 [label="× 2"];
  n2 -> n3 [label=""];
  n1 -> n2 [label=""];
}
```
You can then copy this output and paste it here, and that will generate the following graph:
A white rectangle corresponds to a pipeline node, a gray rectangle corresponds to a query plan step, and an `×` followed by a number corresponds to the number of inputs/outputs that are being used. If you do not want to see them in a compact form, you can always add `compact=0`:
```sql
EXPLAIN PIPELINE graph = 1, compact = 0
WITH (
    SELECT count(*)
    FROM session_events
) AS total_rows
SELECT
    type,
    min(timestamp) AS minimum_date,
    max(timestamp) AS maximum_date,
    (count(*) / total_rows) * 100 AS percentage
FROM session_events
GROUP BY type
FORMAT TSV
```
```response
digraph
{
  rankdir="LR";
  { node [shape = rect]
        n0[label="MergeTreeSelect(pool: PrefetchedReadPool, algorithm: Thread)"];
        n1[label="ExpressionTransform"];
        n2[label="AggregatingTransform"];
        n3[label="Resize"];
        n4[label="ExpressionTransform"];
        n5[label="ExpressionTransform"];
  }
  n0 -> n1;
  n1 -> n2;
  n2 -> n3;
  n3 -> n4;
  n3 -> n5;
}
```
Why does ClickHouse not read from the table using multiple threads? Let's try to add more data to our table:
```sql
INSERT INTO session_events SELECT * FROM generateRandom('clientId UUID,
                sessionId UUID,
                pageId UUID,
                timestamp DateTime,
                type Enum(\'type1\', \'type2\')', 1, 10, 2) LIMIT 1000000;
```
Now let's run our `EXPLAIN` query again:
```sql
EXPLAIN PIPELINE graph = 1, compact = 0
WITH (
    SELECT count(*)
    FROM session_events
) AS total_rows
SELECT
    type,
    min(timestamp) AS minimum_date,
    max(timestamp) AS maximum_date,
    (count(*) / total_rows) * 100 AS percentage
FROM session_events
GROUP BY type
FORMAT TSV
```
```response
digraph
{
  rankdir="LR";
  { node [shape = rect]
        n0[label="MergeTreeSelect(pool: PrefetchedReadPool, algorithm: Thread)"];
        n1[label="MergeTreeSelect(pool: PrefetchedReadPool, algorithm: Thread)"];
        n2[label="ExpressionTransform"];
        n3[label="ExpressionTransform"];
        n4[label="StrictResize"];
        n5[label="AggregatingTransform"];
        n6[label="AggregatingTransform"];
        n7[label="Resize"];
        n8[label="ExpressionTransform"];
        n9[label="ExpressionTransform"];
  }
  n0 -> n2;
  n1 -> n3;
  n2 -> n4;
  n3 -> n4;
  n4 -> n5;
  n4 -> n6;
  n5 -> n7;
  n6 -> n7;
  n7 -> n8;
  n7 -> n9;
}
```
So the executor decided not to parallelize operations because the volume of data was not high enough. By adding more rows, the executor then decided to use multiple threads as shown in the graph.
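You can also cap parallelism explicitly to see how the pipeline shape reacts (a sketch using the standard `max_threads` setting, which is not part of the walkthrough above):

```sql
EXPLAIN PIPELINE
SELECT type, count()
FROM session_events
GROUP BY type
SETTINGS max_threads = 2;
```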
Executor {#executor}
Finally, the last step of the query execution is done by the executor. It takes the query pipeline and executes it. There are different types of executors, depending on whether you are doing a `SELECT`, an `INSERT`, or an `INSERT SELECT`.
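There is no `EXPLAIN` stage for the executor itself, but one way to observe what was actually executed is to query the `system.query_log` table afterwards (a sketch; query logging is enabled by default):

```sql
SELECT query, query_duration_ms, read_rows, written_rows
FROM system.query_log
WHERE type = 'QueryFinish'
ORDER BY event_time DESC
LIMIT 5;
```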