---
description: 'Analyzing Stack Overflow data with ClickHouse'
sidebar_label: 'Stack Overflow'
slug: /getting-started/example-datasets/stackoverflow
title: 'Analyzing Stack Overflow data with ClickHouse'
keywords: ['StackOverflow']
show_related_blogs: true
doc_type: 'guide'
---

import Image from '@theme/IdealImage';
import stackoverflow from '@site/static/images/getting-started/example-datasets/stackoverflow.png'

This dataset contains every `Posts`, `Users`, `Votes`, `Comments`, `Badges`, `PostHistory`, and `PostLinks` record that has occurred on Stack Overflow. Users can either download pre-prepared Parquet versions of the data, containing every post up to April 2024, or download the latest data in XML format and load it themselves. Stack Overflow provides updates to this data periodically - historically every 3 months.

The following diagram shows the schema for the available tables, assuming Parquet format. A description of the schema of this data can be found here.

## Pre-prepared data {#pre-prepared-data}

We provide a copy of this data in Parquet format, up to date as of April 2024. While small for ClickHouse with respect to the number of rows (60 million posts), this dataset contains significant volumes of text and large String columns.

```sql
CREATE DATABASE stackoverflow
```

The following timings are for a 96 GiB, 24 vCPU ClickHouse Cloud cluster located in `eu-west-2`. The dataset is located in `eu-west-3`.
### Posts {#posts}

```sql
CREATE TABLE stackoverflow.posts
(
    `Id` Int32 CODEC(Delta(4), ZSTD(1)),
    `PostTypeId` Enum8('Question' = 1, 'Answer' = 2, 'Wiki' = 3, 'TagWikiExcerpt' = 4, 'TagWiki' = 5, 'ModeratorNomination' = 6, 'WikiPlaceholder' = 7, 'PrivilegeWiki' = 8),
    `AcceptedAnswerId` UInt32,
    `CreationDate` DateTime64(3, 'UTC'),
    `Score` Int32,
    `ViewCount` UInt32 CODEC(Delta(4), ZSTD(1)),
    `Body` String,
    `OwnerUserId` Int32,
    `OwnerDisplayName` String,
    `LastEditorUserId` Int32,
    `LastEditorDisplayName` String,
    `LastEditDate` DateTime64(3, 'UTC') CODEC(Delta(8), ZSTD(1)),
    `LastActivityDate` DateTime64(3, 'UTC'),
    `Title` String,
    `Tags` String,
    `AnswerCount` UInt16 CODEC(Delta(2), ZSTD(1)),
    `CommentCount` UInt8,
    `FavoriteCount` UInt8,
    `ContentLicense` LowCardinality(String),
    `ParentId` String,
    `CommunityOwnedDate` DateTime64(3, 'UTC'),
    `ClosedDate` DateTime64(3, 'UTC')
)
ENGINE = MergeTree
PARTITION BY toYear(CreationDate)
ORDER BY (PostTypeId, toDate(CreationDate), CreationDate)

INSERT INTO stackoverflow.posts SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/*.parquet')

0 rows in set. Elapsed: 265.466 sec. Processed 59.82 million rows, 38.07 GB (225.34 thousand rows/s., 143.42 MB/s.)
```

Posts are also available by year, e.g. https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/2020.parquet

### Votes {#votes}

```sql
CREATE TABLE stackoverflow.votes
(
    `Id` UInt32,
    `PostId` Int32,
    `VoteTypeId` UInt8,
    `CreationDate` DateTime64(3, 'UTC'),
    `UserId` Int32,
    `BountyAmount` UInt8
)
ENGINE = MergeTree
ORDER BY (VoteTypeId, CreationDate, PostId, UserId)

INSERT INTO stackoverflow.votes SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/votes/*.parquet')

0 rows in set. Elapsed: 21.605 sec. Processed 238.98 million rows, 2.13 GB (11.06 million rows/s., 98.46 MB/s.)
```

Votes are also available by year, e.g. https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/votes/2020.parquet

### Comments {#comments}

```sql
CREATE TABLE stackoverflow.comments
(
    `Id` UInt32,
    `PostId` UInt32,
    `Score` UInt16,
    `Text` String,
    `CreationDate` DateTime64(3, 'UTC'),
    `UserId` Int32,
    `UserDisplayName` LowCardinality(String)
)
ENGINE = MergeTree
ORDER BY CreationDate

INSERT INTO stackoverflow.comments SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/comments/*.parquet')

0 rows in set. Elapsed: 56.593 sec. Processed 90.38 million rows, 11.14 GB (1.60 million rows/s., 196.78 MB/s.)
```

Comments are also available by year, e.g. https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/comments/2020.parquet

### Users {#users}

```sql
CREATE TABLE stackoverflow.users
(
    `Id` Int32,
    `Reputation` LowCardinality(String),
    `CreationDate` DateTime64(3, 'UTC') CODEC(Delta(8), ZSTD(1)),
    `DisplayName` String,
    `LastAccessDate` DateTime64(3, 'UTC'),
    `AboutMe` String,
    `Views` UInt32,
    `UpVotes` UInt32,
    `DownVotes` UInt32,
    `WebsiteUrl` String,
    `Location` LowCardinality(String),
    `AccountId` Int32
)
ENGINE = MergeTree
ORDER BY (Id, CreationDate)

INSERT INTO stackoverflow.users SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/users.parquet')

0 rows in set. Elapsed: 10.988 sec. Processed 22.48 million rows, 1.36 GB (2.05 million rows/s., 124.10 MB/s.)
```

### Badges {#badges}

```sql
CREATE TABLE stackoverflow.badges
(
    `Id` UInt32,
    `UserId` Int32,
    `Name` LowCardinality(String),
    `Date` DateTime64(3, 'UTC'),
    `Class` Enum8('Gold' = 1, 'Silver' = 2, 'Bronze' = 3),
    `TagBased` Bool
)
ENGINE = MergeTree
ORDER BY UserId

INSERT INTO stackoverflow.badges SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/badges.parquet')

0 rows in set. Elapsed: 6.635 sec. Processed 51.29 million rows, 797.05 MB (7.73 million rows/s., 120.13 MB/s.)
```

### PostLinks {#postlinks}

```sql
CREATE TABLE stackoverflow.postlinks
(
    `Id` UInt64,
    `CreationDate` DateTime64(3, 'UTC'),
    `PostId` Int32,
    `RelatedPostId` Int32,
    `LinkTypeId` Enum8('Linked' = 1, 'Duplicate' = 3)
)
ENGINE = MergeTree
ORDER BY (PostId, RelatedPostId)
```
```sql
INSERT INTO stackoverflow.postlinks SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/postlinks.parquet')

0 rows in set. Elapsed: 1.534 sec. Processed 6.55 million rows, 129.70 MB (4.27 million rows/s., 84.57 MB/s.)
```

### PostHistory {#posthistory}

```sql
CREATE TABLE stackoverflow.posthistory
(
    `Id` UInt64,
    `PostHistoryTypeId` UInt8,
    `PostId` Int32,
    `RevisionGUID` String,
    `CreationDate` DateTime64(3, 'UTC'),
    `UserId` Int32,
    `Text` String,
    `ContentLicense` LowCardinality(String),
    `Comment` String,
    `UserDisplayName` String
)
ENGINE = MergeTree
ORDER BY (CreationDate, PostId)

INSERT INTO stackoverflow.posthistory SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posthistory/*.parquet')

0 rows in set. Elapsed: 422.795 sec. Processed 160.79 million rows, 67.08 GB (380.30 thousand rows/s., 158.67 MB/s.)
```

## Original dataset {#original-dataset}

The original dataset is available in compressed (7zip) XML format at https://archive.org/download/stackexchange - files with the prefix `stackoverflow.com*`.

### Download {#download}

```bash
wget https://archive.org/download/stackexchange/stackoverflow.com-Badges.7z
wget https://archive.org/download/stackexchange/stackoverflow.com-Comments.7z
wget https://archive.org/download/stackexchange/stackoverflow.com-PostHistory.7z
wget https://archive.org/download/stackexchange/stackoverflow.com-PostLinks.7z
wget https://archive.org/download/stackexchange/stackoverflow.com-Posts.7z
wget https://archive.org/download/stackexchange/stackoverflow.com-Users.7z
wget https://archive.org/download/stackexchange/stackoverflow.com-Votes.7z
```

These files are up to 35GB and can take around 30 minutes to download, depending on your internet connection - the download server throttles at around 20MB/sec.

### Convert to JSON {#convert-to-json}

At the time of writing, ClickHouse does not have native support for XML as an input format. To load the data into ClickHouse, we first convert it to NDJSON.
To convert XML to JSON we recommend the `xq` linux tool, a simple `jq` wrapper for XML documents.

Install xq and jq:

```bash
sudo apt install jq
pip install yq
```

The following steps apply to any of the above files. We use the `stackoverflow.com-Posts.7z` file as an example. Modify as required.

Extract the file using p7zip. This will produce a single xml file - in this case `Posts.xml`.

Files are compressed approximately 4.5x. At 22GB compressed, the posts file requires around 97G uncompressed.

```bash
p7zip -d stackoverflow.com-Posts.7z
```

The following splits the xml file into files, each containing 10000 rows.

```bash
mkdir posts
cd posts
# the following splits the input xml file into sub files of 10000 rows
tail +3 ../Posts.xml | head -n -1 | split -l 10000 --filter='{ printf "<rows>\n"; cat - ; printf "</rows>\n"; } > $FILE' -
```
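To see what the split step produces, here is a scaled-down reproduction on a 7-row stand-in for `Posts.xml`. The file names (`mini.xml`, the `demo/` directory) and the 3-row chunk size are illustrative only; the pipeline is the same as above.

```shell
# Build a toy XML file shaped like Posts.xml: a header line, a <rows> wrapper,
# and one <row .../> element per line.
mkdir -p demo/chunks
{
  echo '<?xml version="1.0" encoding="utf-8"?>'
  echo '<rows>'
  for i in 1 2 3 4 5 6 7; do echo "  <row Id=\"$i\" />"; done
  echo '</rows>'
} > demo/mini.xml

# Same pipeline as above: drop the 2 header lines and the closing </rows>,
# then split every 3 rows, re-wrapping each chunk so it is valid XML on its own.
tail -n +3 demo/mini.xml | head -n -1 | (cd demo/chunks && split -l 3 --filter='{ printf "<rows>\n"; cat - ; printf "</rows>\n"; } > $FILE' -)

ls demo/chunks | wc -l   # 7 rows in chunks of 3 -> 3 files (xaa, xab, xac)
```

Each chunk is now a complete `<rows>...</rows>` document, which is what allows the next step to process the chunks independently.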
After running the above, users will have a set of files, each with 10000 lines. This ensures the memory overhead of the next command is not excessive (XML to JSON conversion is done in memory).

```bash
find . -maxdepth 1 -type f -exec xq -c '.rows.row[]' {} \; | sed -e 's:"@:":g' > posts.json
```

The above command will produce a single `posts.json` file.

Load into ClickHouse with the following command. Note the schema is specified for the `posts.json` file. This will need to be adjusted per data type to align with the target table.

```bash
clickhouse local --query "SELECT * FROM file('posts.json', JSONEachRow, 'Id Int32, PostTypeId UInt8, AcceptedAnswerId UInt32, CreationDate DateTime64(3, \'UTC\'), Score Int32, ViewCount UInt32, Body String, OwnerUserId Int32, OwnerDisplayName String, LastEditorUserId Int32, LastEditorDisplayName String, LastEditDate DateTime64(3, \'UTC\'), LastActivityDate DateTime64(3, \'UTC\'), Title String, Tags String, AnswerCount UInt16, CommentCount UInt8, FavoriteCount UInt8, ContentLicense String, ParentId String, CommunityOwnedDate DateTime64(3, \'UTC\'), ClosedDate DateTime64(3, \'UTC\')') FORMAT Native" | clickhouse client --host <host> --secure --password <password> --query "INSERT INTO stackoverflow.posts_v2 FORMAT Native"
```

## Example queries {#example-queries}

A few simple questions to get you started.
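The `sed` expression in the conversion step exists because `xq` emits XML attributes with an `@` prefix (`"@Id"`, `"@Score"`, ...), while the table columns are named without it. A standalone illustration (the sample record here is made up):

```shell
# xq renders <row Id="1" Score="5"/> as {"@Id":"1","@Score":"5"};
# the sed substitution strips the "@" so the JSON keys match the column names.
echo '{"@Id":"1","@Score":"5"}' | sed -e 's:"@:":g'
```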
### Most popular tags on Stack Overflow {#most-popular-tags-on-stack-overflow}

```sql
SELECT
    arrayJoin(arrayFilter(t -> (t != ''), splitByChar('|', Tags))) AS Tags,
    count() AS c
FROM stackoverflow.posts
GROUP BY Tags
ORDER BY c DESC
LIMIT 10

┌─Tags───────┬───────c─┐
│ javascript │ 2527130 │
│ python     │ 2189638 │
│ java       │ 1916156 │
│ c#         │ 1614236 │
│ php        │ 1463901 │
│ android    │ 1416442 │
│ html       │ 1186567 │
│ jquery     │ 1034621 │
│ c++        │  806202 │
│ css        │  803755 │
└────────────┴─────────┘

10 rows in set. Elapsed: 1.013 sec. Processed 59.82 million rows, 1.21 GB (59.07 million rows/s., 1.19 GB/s.)
Peak memory usage: 224.03 MiB.
```

### User with the most answers (active accounts) {#user-with-the-most-answers-active-accounts}

Account requires a `UserId`.

```sql
SELECT
    any(OwnerUserId) UserId,
    OwnerDisplayName,
    count() AS c
FROM stackoverflow.posts
WHERE (OwnerDisplayName != '') AND (PostTypeId = 'Answer') AND (OwnerUserId != 0)
GROUP BY OwnerDisplayName
ORDER BY c DESC
LIMIT 5

┌─UserId─┬─OwnerDisplayName─┬────c─┐
│  22656 │ Jon Skeet        │ 2727 │
│  23354 │ Marc Gravell     │ 2150 │
│  12950 │ tvanfosson       │ 1530 │
│   3043 │ Joel Coehoorn    │ 1438 │
│  10661 │ S.Lott           │ 1087 │
└────────┴──────────────────┴──────┘
```
5 rows in set. Elapsed: 0.154 sec. Processed 35.83 million rows, 193.39 MB (232.33 million rows/s., 1.25 GB/s.)
Peak memory usage: 206.45 MiB.

### ClickHouse related posts with the most views {#clickhouse-related-posts-with-the-most-views}

```sql
SELECT
    Id,
    Title,
    ViewCount,
    AnswerCount
FROM stackoverflow.posts
WHERE Title ILIKE '%ClickHouse%'
ORDER BY ViewCount DESC
LIMIT 10

┌───────Id─┬─Title────────────────────────────────────────────────────────────────────────────┬─ViewCount─┬─AnswerCount─┐
│ 52355143 │ Is it possible to delete old records from clickhouse table?                      │     41462 │           3 │
│ 37954203 │ Clickhouse Data Import                                                           │     38735 │           3 │
│ 37901642 │ Updating data in Clickhouse                                                      │     36236 │           6 │
│ 58422110 │ Pandas: How to insert dataframe into Clickhouse                                  │     29731 │           4 │
│ 63621318 │ DBeaver - Clickhouse - SQL Error [159] .. Read timed out                         │     27350 │           1 │
│ 47591813 │ How to filter clickhouse table by array column contents?                         │     27078 │           2 │
│ 58728436 │ How to search the string in query with case insensitive on Clickhouse database?  │     26567 │           3 │
│ 65316905 │ Clickhouse: DB::Exception: Memory limit (for query) exceeded                     │     24899 │           2 │
│ 49944865 │ How to add a column in clickhouse                                                │     24424 │           1 │
│ 59712399 │ How to cast date Strings to DateTime format with extended parsing in ClickHouse? │     22620 │           1 │
└──────────┴──────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────────┘

10 rows in set. Elapsed: 0.472 sec. Processed 59.82 million rows, 1.91 GB (126.63 million rows/s., 4.03 GB/s.)
Peak memory usage: 240.01 MiB.
```
### Most controversial posts {#most-controversial-posts}

```sql
SELECT
    Id,
    Title,
    UpVotes,
    DownVotes,
    abs(UpVotes - DownVotes) AS Controversial_ratio
FROM stackoverflow.posts
INNER JOIN
(
    SELECT
        PostId,
        countIf(VoteTypeId = 2) AS UpVotes,
        countIf(VoteTypeId = 3) AS DownVotes
    FROM stackoverflow.votes
    GROUP BY PostId
    HAVING (UpVotes > 10) AND (DownVotes > 10)
) AS votes ON posts.Id = votes.PostId
WHERE Title != ''
ORDER BY Controversial_ratio ASC
LIMIT 3
```
```text
┌───────Id─┬─Title─────────────────────────────────────────────┬─UpVotes─┬─DownVotes─┬─Controversial_ratio─┐
│   583177 │ VB.NET Infinite For Loop                          │      12 │        12 │                   0 │
│  9756797 │ Read console input as enumerable - one statement? │      16 │        16 │                   0 │
│ 13329132 │ What's the point of ARGV in Ruby?                 │      22 │        22 │                   0 │
└──────────┴───────────────────────────────────────────────────┴─────────┴───────────┴─────────────────────┘

3 rows in set. Elapsed: 4.779 sec. Processed 298.80 million rows, 3.16 GB (62.52 million rows/s., 661.05 MB/s.)
Peak memory usage: 6.05 GiB.
```

## Attribution {#attribution}

We thank Stack Overflow for providing this data under the cc-by-sa 4.0 license, acknowledging their efforts and the original source of the data at https://archive.org/details/stackexchange.
---
description: 'A benchmark dataset used for comparing the performance of data warehousing solutions.'
sidebar_label: 'AMPLab big data benchmark'
slug: /getting-started/example-datasets/amplab-benchmark
title: 'AMPLab Big Data Benchmark'
keywords: ['AMPLab benchmark', 'big data benchmark', 'data warehousing performance', 'benchmark dataset', 'getting started']
doc_type: 'guide'
---

See https://amplab.cs.berkeley.edu/benchmark/

Sign up for a free account at https://aws.amazon.com. It requires a credit card, email, and phone number. Get a new access key at https://console.aws.amazon.com/iam/home?nc2=h_m_sc#security_credential

Run the following in the console:

```bash
$ sudo apt-get install s3cmd
$ mkdir tiny; cd tiny;
$ s3cmd sync s3://big-data-benchmark/pavlo/text-deflate/tiny/ .
$ cd ..
$ mkdir 1node; cd 1node;
$ s3cmd sync s3://big-data-benchmark/pavlo/text-deflate/1node/ .
$ cd ..
$ mkdir 5nodes; cd 5nodes;
$ s3cmd sync s3://big-data-benchmark/pavlo/text-deflate/5nodes/ .
$ cd ..
```

Run the following ClickHouse queries:

```sql
CREATE TABLE rankings_tiny
(
    pageURL String,
    pageRank UInt32,
    avgDuration UInt32
) ENGINE = Log;

CREATE TABLE uservisits_tiny
(
    sourceIP String,
    destinationURL String,
    visitDate Date,
    adRevenue Float32,
    UserAgent String,
    cCode FixedString(3),
    lCode FixedString(6),
    searchWord String,
    duration UInt32
) ENGINE = MergeTree(visitDate, visitDate, 8192);

CREATE TABLE rankings_1node
(
    pageURL String,
    pageRank UInt32,
    avgDuration UInt32
) ENGINE = Log;

CREATE TABLE uservisits_1node
(
    sourceIP String,
    destinationURL String,
    visitDate Date,
    adRevenue Float32,
    UserAgent String,
    cCode FixedString(3),
    lCode FixedString(6),
    searchWord String,
    duration UInt32
) ENGINE = MergeTree(visitDate, visitDate, 8192);

CREATE TABLE rankings_5nodes_on_single
(
    pageURL String,
    pageRank UInt32,
    avgDuration UInt32
) ENGINE = Log;

CREATE TABLE uservisits_5nodes_on_single
(
    sourceIP String,
    destinationURL String,
    visitDate Date,
    adRevenue Float32,
    UserAgent String,
    cCode FixedString(3),
    lCode FixedString(6),
    searchWord String,
    duration UInt32
) ENGINE = MergeTree(visitDate, visitDate, 8192);
```

Go back to the console:
```bash
$ for i in tiny/rankings/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO rankings_tiny FORMAT CSV"; done
$ for i in tiny/uservisits/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO uservisits_tiny FORMAT CSV"; done
$ for i in 1node/rankings/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO rankings_1node FORMAT CSV"; done
$ for i in 1node/uservisits/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO uservisits_1node FORMAT CSV"; done
$ for i in 5nodes/rankings/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO rankings_5nodes_on_single FORMAT CSV"; done
$ for i in 5nodes/uservisits/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO uservisits_5nodes_on_single FORMAT CSV"; done
```

Queries for obtaining data samples:

```sql
SELECT pageURL, pageRank FROM rankings_1node WHERE pageRank > 1000

SELECT substring(sourceIP, 1, 8), sum(adRevenue) FROM uservisits_1node GROUP BY substring(sourceIP, 1, 8)

SELECT
    sourceIP,
    sum(adRevenue) AS totalRevenue,
    avg(pageRank) AS pageRank
FROM rankings_1node ALL INNER JOIN
(
    SELECT
        sourceIP,
        destinationURL AS pageURL,
        adRevenue
    FROM uservisits_1node
    WHERE (visitDate > '1980-01-01') AND (visitDate < '1980-04-01')
) USING pageURL
GROUP BY sourceIP
ORDER BY totalRevenue DESC
LIMIT 1
```
---
description: 'Dataset containing 1.3 million records of historical data on the menus of hotels, restaurants and cafes with the dishes along with their prices.'
sidebar_label: 'New York Public Library "what''s on the menu?" dataset'
slug: /getting-started/example-datasets/menus
title: 'New York Public Library "What''s on the Menu?" Dataset'
doc_type: 'guide'
keywords: ['example dataset', 'menus', 'historical data', 'sample data', 'nypl']
---

The dataset was created by the New York Public Library. It contains historical data on the menus of hotels, restaurants and cafes with the dishes along with their prices.

Source: http://menus.nypl.org/data
The data is in the public domain.

The data is from the library's archive, and it may be incomplete and difficult for statistical analysis. Nevertheless, it is also very yummy.
The size is just 1.3 million records about dishes in the menus - a very small data volume for ClickHouse, but still a good example.

## Download the dataset {#download-dataset}

Run the command:

```bash
wget https://s3.amazonaws.com/menusdata.nypl.org/gzips/2021_08_01_07_01_17_data.tgz

# Option: Validate the checksum
md5sum 2021_08_01_07_01_17_data.tgz
# Checksum should be equal to: db6126724de939a5481e3160a2d67d15
```

Replace the link with an up to date link from http://menus.nypl.org/data if needed.
Download size is about 35 MB.

## Unpack the dataset {#unpack-dataset}

```bash
tar xvf 2021_08_01_07_01_17_data.tgz
```

Uncompressed size is about 150 MB.

The data is normalized and consists of four tables:
- `Menu` - Information about menus: the name of the restaurant, the date when the menu was seen, etc.
- `Dish` - Information about dishes: the name of the dish along with some characteristics.
- `MenuPage` - Information about the pages in the menus, because every page belongs to some menu.
- `MenuItem` - An item of the menu. A dish along with its price on some menu page: links to dish and menu page.

## Create the tables {#create-tables}

We use the Decimal data type to store prices.
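The checksum step above can also be done non-interactively with `md5sum -c`, which compares a file against a recorded hash and exits non-zero on mismatch. A self-contained sketch (`sample.tgz` and its hash are stand-ins for the real archive):

```shell
# Record a checksum for a file, then verify it; md5sum -c prints "<file>: OK"
# and exits 0 only while the file still matches the recorded hash.
printf 'sample data\n' > sample.tgz
md5sum sample.tgz > sample.tgz.md5
md5sum -c sample.tgz.md5
```

This pattern is handy in scripts, where a failed check should abort the rest of the pipeline.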
```sql
CREATE TABLE dish
(
    id UInt32,
    name String,
    description String,
    menus_appeared UInt32,
    times_appeared Int32,
    first_appeared UInt16,
    last_appeared UInt16,
    lowest_price Decimal64(3),
    highest_price Decimal64(3)
) ENGINE = MergeTree ORDER BY id;

CREATE TABLE menu
(
    id UInt32,
    name String,
    sponsor String,
    event String,
    venue String,
    place String,
    physical_description String,
    occasion String,
    notes String,
    call_number String,
    keywords String,
    language String,
    date String,
    location String,
    location_type String,
    currency String,
    currency_symbol String,
    status String,
    page_count UInt16,
    dish_count UInt16
) ENGINE = MergeTree ORDER BY id;

CREATE TABLE menu_page
(
    id UInt32,
    menu_id UInt32,
    page_number UInt16,
    image_id String,
    full_height UInt16,
    full_width UInt16,
    uuid UUID
) ENGINE = MergeTree ORDER BY id;
```
```sql
CREATE TABLE menu_item
(
    id UInt32,
    menu_page_id UInt32,
    price Decimal64(3),
    high_price Decimal64(3),
    dish_id UInt32,
    created_at DateTime,
    updated_at DateTime,
    xpos Float64,
    ypos Float64
) ENGINE = MergeTree ORDER BY id;
```

## Import the data {#import-data}

To upload the data into ClickHouse, run:

```bash
clickhouse-client --format_csv_allow_single_quotes 0 --input_format_null_as_default 0 --query "INSERT INTO dish FORMAT CSVWithNames" < Dish.csv
clickhouse-client --format_csv_allow_single_quotes 0 --input_format_null_as_default 0 --query "INSERT INTO menu FORMAT CSVWithNames" < Menu.csv
clickhouse-client --format_csv_allow_single_quotes 0 --input_format_null_as_default 0 --query "INSERT INTO menu_page FORMAT CSVWithNames" < MenuPage.csv
clickhouse-client --format_csv_allow_single_quotes 0 --input_format_null_as_default 0 --date_time_input_format best_effort --query "INSERT INTO menu_item FORMAT CSVWithNames" < MenuItem.csv
```

We use the `CSVWithNames` format as the data is represented by CSV with a header.

We disable `format_csv_allow_single_quotes` as only double quotes are used for data fields; single quotes can appear inside the values and should not confuse the CSV parser.

We disable `input_format_null_as_default` as our data does not have `NULL`. Otherwise ClickHouse will try to parse `\N` sequences and can be confused by `\` in the data.

The setting `--date_time_input_format best_effort` allows parsing `DateTime` fields in a wide variety of formats. For example, ISO-8601 without seconds, like '2000-01-01 01:02', will be recognized. Without this setting only a fixed `DateTime` format is allowed.

## Denormalize the data {#denormalize-data}

Data is presented in multiple tables in normalized form. It means you have to perform a JOIN if you want to query, e.g. dish names from menu items. For typical analytical tasks it is way more efficient to deal with pre-JOINed data, to avoid doing a JOIN every time. It is called "denormalized" data.
We will create a table `menu_item_denorm`, which will contain all the data JOINed together:
```sql
CREATE TABLE menu_item_denorm
ENGINE = MergeTree ORDER BY (dish_name, created_at)
AS SELECT
    price,
    high_price,
    created_at,
    updated_at,
    xpos,
    ypos,
    dish.id AS dish_id,
    dish.name AS dish_name,
    dish.description AS dish_description,
    dish.menus_appeared AS dish_menus_appeared,
    dish.times_appeared AS dish_times_appeared,
    dish.first_appeared AS dish_first_appeared,
    dish.last_appeared AS dish_last_appeared,
    dish.lowest_price AS dish_lowest_price,
    dish.highest_price AS dish_highest_price,
    menu.id AS menu_id,
    menu.name AS menu_name,
    menu.sponsor AS menu_sponsor,
    menu.event AS menu_event,
    menu.venue AS menu_venue,
    menu.place AS menu_place,
    menu.physical_description AS menu_physical_description,
    menu.occasion AS menu_occasion,
    menu.notes AS menu_notes,
    menu.call_number AS menu_call_number,
    menu.keywords AS menu_keywords,
    menu.language AS menu_language,
    menu.date AS menu_date,
    menu.location AS menu_location,
    menu.location_type AS menu_location_type,
    menu.currency AS menu_currency,
    menu.currency_symbol AS menu_currency_symbol,
    menu.status AS menu_status,
    menu.page_count AS menu_page_count,
    menu.dish_count AS menu_dish_count
FROM menu_item
    JOIN dish ON menu_item.dish_id = dish.id
    JOIN menu_page ON menu_item.menu_page_id = menu_page.id
    JOIN menu ON menu_page.menu_id = menu.id;
```

## Validate the data {#validate-data}

Query:

```sql
SELECT count() FROM menu_item_denorm;
```

Result:

```text
┌─count()─┐
│ 1329175 │
└─────────┘
```

## Run some queries {#run-queries}

### Averaged historical prices of dishes {#query-averaged-historical-prices}

Query:

```sql
SELECT
    round(toUInt32OrZero(extract(menu_date, '^\\d{4}')), -1) AS d,
    count(),
    round(avg(price), 2),
    bar(avg(price), 0, 100, 100)
FROM menu_item_denorm
WHERE (menu_currency = 'Dollars') AND (d > 0) AND (d < 2022)
GROUP BY d
ORDER BY d ASC;
```

Result:
```text
┌────d─┬─count()─┬─round(avg(price), 2)─┬─bar(avg(price), 0, 100, 100)─┐
│ 1850 │     618 │                  1.5 │ █▍                           │
│ 1860 │    1634 │                 1.29 │ █▎                           │
│ 1870 │    2215 │                 1.36 │ █▎                           │
│ 1880 │    3909 │                 1.01 │ █                            │
│ 1890 │    8837 │                  1.4 │ █▍                           │
│ 1900 │  176292 │                 0.68 │ ▋                            │
│ 1910 │  212196 │                 0.88 │ ▊                            │
│ 1920 │  179590 │                 0.74 │ ▋                            │
│ 1930 │   73707 │                  0.6 │ ▌                            │
│ 1940 │   58795 │                 0.57 │ ▌                            │
│ 1950 │   41407 │                 0.95 │ ▊                            │
│ 1960 │   51179 │                 1.32 │ █▎                           │
│ 1970 │   12914 │                 1.86 │ █▋                           │
│ 1980 │    7268 │                 4.35 │ ████▎                        │
│ 1990 │   11055 │                 6.03 │ ██████                       │
│ 2000 │    2467 │                11.85 │ ███████████▋                 │
│ 2010 │     597 │                25.66 │ █████████████████████████▋   │
└──────┴─────────┴──────────────────────┴──────────────────────────────┘
```

Take it with a grain of salt.

### Burger prices {#query-burger-prices}

Query:

```sql
SELECT
    round(toUInt32OrZero(extract(menu_date, '^\\d{4}')), -1) AS d,
    count(),
    round(avg(price), 2),
    bar(avg(price), 0, 50, 100)
FROM menu_item_denorm
WHERE (menu_currency = 'Dollars') AND (d > 0) AND (d < 2022) AND (dish_name ILIKE '%burger%')
GROUP BY d
ORDER BY d ASC;
```

Result:
```text
┌────d─┬─count()─┬─round(avg(price), 2)─┬─bar(avg(price), 0, 50, 100)───────────┐
│ 1880 │       2 │                 0.42 │ ▋                                     │
│ 1890 │       7 │                 0.85 │ █▋                                    │
│ 1900 │     399 │                 0.49 │ ▊                                     │
│ 1910 │     589 │                 0.68 │ █▎                                    │
│ 1920 │     280 │                 0.56 │ █                                     │
│ 1930 │      74 │                 0.42 │ ▋                                     │
│ 1940 │     119 │                 0.59 │ █▏                                    │
│ 1950 │     134 │                 1.09 │ ██▏                                   │
│ 1960 │     272 │                 0.92 │ █▋                                    │
│ 1970 │     108 │                 1.18 │ ██▎                                   │
│ 1980 │      88 │                 2.82 │ █████▋                                │
│ 1990 │     184 │                 3.68 │ ███████▎                              │
│ 2000 │      21 │                 7.14 │ ██████████████▎                       │
│ 2010 │       6 │                18.42 │ ████████████████████████████████████▋ │
└──────┴─────────┴──────────────────────┴───────────────────────────────────────┘
```

### Vodka {#query-vodka}

Query:

```sql
SELECT
    round(toUInt32OrZero(extract(menu_date, '^\\d{4}')), -1) AS d,
    count(),
    round(avg(price), 2),
    bar(avg(price), 0, 50, 100)
FROM menu_item_denorm
WHERE (menu_currency IN ('Dollars', '')) AND (d > 0) AND (d < 2022) AND (dish_name ILIKE '%vodka%')
GROUP BY d
ORDER BY d ASC;
```

Result:

```text
┌────d─┬─count()─┬─round(avg(price), 2)─┬─bar(avg(price), 0, 50, 100)─┐
│ 1910 │       2 │                    0 │                             │
│ 1920 │       1 │                  0.3 │ ▌                           │
│ 1940 │      21 │                 0.42 │ ▋                           │
│ 1950 │      14 │                 0.59 │ █▏                          │
│ 1960 │     113 │                 2.17 │ ████▎                       │
│ 1970 │      37 │                 0.68 │ █▎                          │
│ 1980 │      19 │                 2.55 │ █████                       │
│ 1990 │      86 │                  3.6 │ ███████▏                    │
│ 2000 │       2 │                 3.98 │ ███████▊                    │
└──────┴─────────┴──────────────────────┴─────────────────────────────┘
```

To get vodka we have to write `ILIKE '%vodka%'`, and this definitely makes a statement.

### Caviar {#query-caviar}

Let's print caviar prices. Also, let's print the name of any dish with caviar.

Query:

```sql
SELECT
    round(toUInt32OrZero(extract(menu_date, '^\\d{4}')), -1) AS d,
    count(),
    round(avg(price), 2),
    bar(avg(price), 0, 50, 100),
    any(dish_name)
FROM menu_item_denorm
WHERE (menu_currency IN ('Dollars', '')) AND (d > 0) AND (d < 2022) AND (dish_name ILIKE '%caviar%')
GROUP BY d
ORDER BY d ASC;
```
Result:
┌────d─┬─count()─┬─round(avg(price), 2)─┬─bar(avg(price), 0, 50, 100)──────┬─any(dish_name)──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ 1090 │ 1 │ 0 │ │ Caviar │
│ 1880 │ 3 │ 0 │ │ Caviar │
│ 1890 │ 39 │ 0.59 │ █▏ │ Butter and caviar │
│ 1900 │ 1014 │ 0.34 │ ▋ │ Anchovy Caviar on Toast │
│ 1910 │ 1588 │ 1.35 │ ██▋ │ 1/1 Brötchen Caviar │
│ 1920 │ 927 │ 1.37 │ ██▋ │ ASTRAKAN CAVIAR │
│ 1930 │ 289 │ 1.91 │ ███▋ │ Astrachan caviar │
│ 1940 │ 201 │ 0.83 │ █▋ │ (SPECIAL) Domestic Caviar Sandwich │
│ 1950 │ 81 │ 2.27 │ ████▌ │ Beluga Caviar │
│ 1960 │ 126 │ 2.21 │ ████▍ │ Beluga Caviar │
│ 1970 │ 105 │ 0.95 │ █▊ │ BELUGA MALOSSOL CAVIAR AMERICAN DRESSING │
│ 1980 │ 12 │ 7.22 │ ██████████████▍ │ Authentic Iranian Beluga Caviar the world's finest black caviar presented in ice garni and a sampling of chilled 100° Russian vodka │
│ 1990 │ 74 │ 14.42 │ ████████████████████████████▋ │ Avocado Salad, Fresh cut avocado with caviare │
│ 2000 │ 3 │ 7.82 │ ███████████████▋ │ Aufgeschlagenes Kartoffelsueppchen mit Forellencaviar │
│ 2010 │ 6 │ 15.58 │ ███████████████████████████████▏ │ "OYSTERS AND PEARLS" "Sabayon" of Pearl Tapioca with Island Creek Oysters and Russian Sevruga Caviar │
└──────┴─────────┴──────────────────────┴──────────────────────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
At least they have caviar with vodka. Very nice.

Online playground {#playground}

The data is uploaded to ClickHouse Playground, example .
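The decade bucketing used in the queries above — `round(toUInt32OrZero(extract(menu_date, '^\\d{4}')), -1)` — can be mimicked in plain Python. This is an illustrative sketch only, not part of the dataset tooling; the helper name `decade` is hypothetical:

```python
import re

def decade(menu_date: str) -> int:
    """Mimic round(toUInt32OrZero(extract(s, '^\\d{4}')), -1):
    take the leading 4-digit year (0 if absent) and round to the
    nearest 10. Both ClickHouse round() and Python round() use
    banker's rounding, so the halfway cases agree."""
    m = re.match(r"\d{4}", menu_date)
    year = int(m.group(0)) if m else 0
    return round(year, -1)

print(decade("1914-05-01"))  # 1910
print(decade("no date"))     # 0
```

This is why the query can filter on `d > 0`: rows whose `menu_date` has no leading year all land in bucket 0.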
description: 'Dataset containing 100 million vectors from the LAION 5B dataset'
sidebar_label: 'LAION 5B dataset'
slug: /getting-started/example-datasets/laion-5b-dataset
title: 'LAION 5B dataset'
keywords: ['semantic search', 'vector similarity', 'approximate nearest neighbours', 'embeddings']
doc_type: 'guide'

import search_results_image from '@site/static/images/getting-started/example-datasets/laion5b_visualization_1.png'
import Image from '@theme/IdealImage';

Introduction {#introduction}

The LAION 5B dataset contains 5.85 billion image-text embeddings and associated image metadata. The embeddings were generated using the OpenAI CLIP model ViT-L/14. The dimension of each embedding vector is 768.

This dataset can be used to model the design, sizing and performance aspects of a large-scale, real-world vector search application. The dataset can be used for both text-to-image search and image-to-image search.

Dataset details {#dataset-details}

The complete dataset is available as a mixture of npy and Parquet files at the-eye.eu. ClickHouse has made a subset of 100 million vectors available in an S3 bucket. The S3 bucket contains 10 Parquet files, each containing 10 million rows.

We recommend users first run a sizing exercise to estimate the storage and memory requirements for this dataset by referring to the documentation.

Steps {#steps}

Create table {#create-table}

Create the laion_5b_100m table to store the embeddings and their associated attributes:

```sql
CREATE TABLE laion_5b_100m
(
    id UInt32,
    image_path String,
    caption String,
    NSFW Nullable(String) default 'unknown',
    similarity Float32,
    LICENSE Nullable(String),
    url String,
    key String,
    status LowCardinality(String),
    width Int32,
    height Int32,
    original_width Int32,
    original_height Int32,
    exif Nullable(String),
    md5 String,
    vector Array(Float32) CODEC(NONE)
)
ENGINE = MergeTree
ORDER BY (id)
```

The id is just an incrementing integer.
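As a back-of-the-envelope sizing sketch for the table above (illustrative arithmetic only; the sizing documentation linked in this guide is authoritative), the raw vector data alone dominates storage:

```python
rows = 100_000_000   # the 100 million row subset
dim = 768            # ViT-L/14 embedding dimension
bytes_fp32 = 4       # Float32 per vector component
bytes_bf16 = 2       # bf16 quantization, as used later for the vector index

raw_vectors_gib = rows * dim * bytes_fp32 / 2**30
index_vectors_gib = rows * dim * bytes_bf16 / 2**30

print(f"fp32 vectors: {raw_vectors_gib:.0f} GiB")        # ~286 GiB
print(f"bf16 copy in the index: {index_vectors_gib:.0f} GiB")  # ~143 GiB
```

This also explains the `CODEC(NONE)` choice on the vector column: dense float embeddings compress poorly, so skipping compression avoids wasted CPU.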
The additional attributes can be used in predicates to explore vector similarity search combined with post-filtering/pre-filtering, as explained in the documentation.

Load data {#load-table}

To load the dataset from all Parquet files, run the following SQL statement:

```sql
INSERT INTO laion_5b_100m SELECT * FROM s3('https://clickhouse-datasets.s3.amazonaws.com/laion-5b/laion5b_100m_*.parquet');
```

Loading all 100 million rows into the table will take a few minutes. Alternatively, individual SQL statements can be run to load a specific subset of the files/rows:

```sql
INSERT INTO laion_5b_100m SELECT * FROM s3('https://clickhouse-datasets.s3.amazonaws.com/laion-5b/laion5b_100m_part_1_of_10.parquet');
INSERT INTO laion_5b_100m SELECT * FROM s3('https://clickhouse-datasets.s3.amazonaws.com/laion-5b/laion5b_100m_part_2_of_10.parquet');
⋮
```
Run a brute-force vector similarity search {#run-a-brute-force-vector-similarity-search}

KNN (k-nearest neighbours) search, or brute-force search, involves calculating the distance of each vector in the dataset to the search embedding vector and then ordering the distances to get the nearest neighbours. We can use one of the vectors from the dataset itself as the search vector. For example:

```sql title="Query"
SELECT id, url
FROM laion_5b_100m
ORDER BY cosineDistance(vector, (SELECT vector FROM laion_5b_100m WHERE id = 9999)) ASC
LIMIT 20
```

The vector in the row with id = 9999 is the embedding for an image of a Deli restaurant.
```response title="Response"
┌───────id─┬─url──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
 1. │ 9999 │ https://certapro.com/belleville/wp-content/uploads/sites/1369/2017/01/McAlistersFairviewHgts.jpg │
 2. │ 60180509 │ https://certapro.com/belleville/wp-content/uploads/sites/1369/2017/01/McAlistersFairviewHgts-686x353.jpg │
 3. │ 1986089 │ https://www.gannett-cdn.com/-mm-/ceefab710d945bb3432c840e61dce6c3712a7c0a/c=30-0-4392-3280/local/-/media/2017/02/14/FortMyers/FortMyers/636226855169587730-McAlister-s-Exterior-Signage.jpg?width=534&height=401&fit=crop │
 4. │ 51559839 │ https://img1.mashed.com/img/gallery/how-rich-is-the-mcalisters-deli-ceo-and-whats-the-average-pay-of-its-employees/intro-1619793841.jpg │
 5. │ 22104014 │ https://www.restaurantmagazine.com/wp-content/uploads/2016/04/Largest-McAlisters-Deli-Franchisee-to-Expand-into-Nebraska.jpg │
 6. │ 54337236 │ http://www.restaurantnews.com/wp-content/uploads/2015/11/McAlisters-Deli-Giving-Away-Gift-Cards-With-Win-One-Gift-One-Holiday-Promotion.jpg │
 7. │ 20770867 │ http://www.restaurantnews.com/wp-content/uploads/2016/04/McAlisters-Deli-Aims-to-Attract-New-Franchisees-in-Florida-as-Chain-Enters-New-Markets.jpg │
 8. │ 22493966 │ https://www.restaurantmagazine.com/wp-content/uploads/2016/06/McAlisters-Deli-Aims-to-Attract-New-Franchisees-in-Columbus-Ohio-as-Chain-Expands-feature.jpg │
 9. │ 2224351 │ https://holttribe.com/wp-content/uploads/2019/10/60880046-879A-49E4-8E13-1EE75FB24980-900x675.jpeg │
10. │ 30779663 │ https://www.gannett-cdn.com/presto/2018/10/29/PMUR/685f3e50-cce5-46fb-9a66-acb93f6ea5e5-IMG_6587.jpg?crop=2166,2166,x663,y0&width=80&height=80&fit=bounds │
11. │ 54939148 │ https://www.priceedwards.com/sites/default/files/styles/staff_property_listing_block/public/for-lease/images/IMG_9674%20%28Custom%29_1.jpg?itok=sa8hrVBT │
12. │ 95371605 │ http://www.restaurantmagazine.com/wp-content/uploads/2015/08/McAlisters-Deli-Signs-Development-Agreement-with-Kingdom-Foods-to-Grow-in-Southern-Mississippi.jpg │
13. │ 79564563 │ https://www.restaurantmagazine.com/wp-content/uploads/2016/05/McAlisters-Deli-Aims-to-Attract-New-Franchisees-in-Denver-as-Chain-Expands.jpg │
14. │ 76429939 │ http://www.restaurantnews.com/wp-content/uploads/2016/08/McAlisters-Deli-Aims-to-Attract-New-Franchisees-in-Pennsylvania-as-Chain-Expands.jpg │
15. │ 96680635 │ https://img.claz.org/tc/400x320/9w3hll-UQNHGB9WFlhSGAVCWhheBQkeWh5SBAkUWh9SBgsJFxRcBUMNSR4cAQENXhJARwgNTRYcBAtDWh5WRQEJXR5SR1xcFkYKR1tYFkYGR1pVFiVyP0ImaTA │
16. │ 48716846 │ http://tse2.mm.bing.net/th?id=OIP.nN2qJqGUJs_fVNdTiFyGnQHaEc │
17. │ 4472333 │ https://sgi.offerscdn.net/i/zdcs-merchants/05lG0FpXPIvsfiHnT3N8FQE.h200.w220.flpad.v22.bffffff.png │
18. │ 82667887 │ https://irs2.4sqi.net/img/general/200x200/11154479_OEGbrkgWB5fEGrrTkktYvCj1gcdyhZn7TSQSAqN2Yqw.jpg │
19. │ 57525607 │ https://knoji.com/images/logo/mcalistersdelicom.jpg │
20. │ 15785896 │ https://www.groupnimb.com/mimg/merimg/mcalister-s-deli_1446088739.jpg │
└──────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
20 rows in set. Elapsed: 3.968 sec. Processed 100.38 million rows, 320.81 GB (25.30 million rows/s., 80.84 GB/s.)
```

Note down the query latency so that we can compare it with the query latency of the ANN search (using a vector index). With 100 million rows, the above query without a vector index could take a few seconds or minutes to complete.

Build a vector similarity index {#build-vector-similarity-index}

Run the following SQL to define and build a vector similarity index on the vector column of the laion_5b_100m table:

```sql
ALTER TABLE laion_5b_100m ADD INDEX vector_index vector TYPE vector_similarity('hnsw', 'cosineDistance', 768, 'bf16', 64, 512);

ALTER TABLE laion_5b_100m MATERIALIZE INDEX vector_index SETTINGS mutations_sync = 2;
```

The parameters and performance considerations for index creation and search are described in the documentation. The statement above uses values of 64 and 512 respectively for the HNSW hyperparameters M and ef_construction. Users need to carefully select optimal values for these parameters by evaluating index build time and search result quality for the selected values.

Building and saving the index could take a few hours for the full 100 million row dataset, depending on the number of CPU cores available and the storage bandwidth.

Perform ANN search {#perform-ann-search}

Once the vector similarity index has been built, vector search queries will automatically use it:

```sql title="Query"
SELECT id, url
FROM laion_5b_100m
ORDER BY cosineDistance(vector, (SELECT vector FROM laion_5b_100m WHERE id = 9999)) ASC
LIMIT 20
```

The first load of the vector index into memory could take a few seconds or minutes.

Generate embeddings for search query {#generating-embeddings-for-search-query}

The LAION 5B dataset embedding vectors were generated using the OpenAI CLIP model ViT-L/14. An example Python script is provided below to demonstrate how to programmatically generate embedding vectors using the CLIP APIs.
The search embedding vector is then passed as an argument to the cosineDistance() function in the SELECT query. To install the clip package, please refer to the OpenAI GitHub repository.

```python
import torch
import clip
import numpy as np
import sys
import clickhouse_connect

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-L/14", device=device)

# Search for images that contain both a dog and a cat
text = clip.tokenize(["a dog and a cat"]).to(device)

with torch.no_grad():
    text_features = model.encode_text(text)
    np_arr = text_features.detach().cpu().numpy()

# Pass ClickHouse credentials here
chclient = clickhouse_connect.get_client()

params = {'v1': list(np_arr[0])}
result = chclient.query("SELECT id, url FROM laion_5b_100m ORDER BY cosineDistance(vector, %(v1)s) LIMIT 100", parameters=params)
# Write the results to a simple HTML page that can be opened in the browser. Some URLs may have become obsolete.
print("<html>")
for r in result.result_rows:
    print("<img src = ", r[1], 'width="200" height="200">')
print("</html>")
```

The result of the above search is shown below:
description: 'Dataset containing 400 million images with English image captions'
sidebar_label: 'Laion-400M dataset'
slug: /getting-started/example-datasets/laion-400m-dataset
title: 'Laion-400M dataset'
doc_type: 'guide'
keywords: ['example dataset', 'laion', 'image embeddings', 'sample data', 'machine learning']

The Laion-400M dataset contains 400 million images with English image captions. Laion nowadays provides an even larger dataset, but working with it is similar.

The dataset contains the image URL, embeddings for both the image and the image caption, a similarity score between the image and the image caption, as well as metadata, e.g. the image width/height, the licence and an NSFW flag. We can use the dataset to demonstrate approximate nearest neighbor search in ClickHouse.

Data preparation {#data-preparation}

The embeddings and the metadata are stored in separate files in the raw data. A data preparation step downloads the data, merges the files, converts them to CSV and imports them into ClickHouse.
You can use the following download.sh script for that:

```bash
number=${1}
if [[ $number == '' ]]; then
  number=1
fi;
wget --tries=100 https://deploy.laion.ai/8f83b608504d46bb81708ec86e912220/embeddings/img_emb/img_emb_${number}.npy          # download image embedding
wget --tries=100 https://deploy.laion.ai/8f83b608504d46bb81708ec86e912220/embeddings/text_emb/text_emb_${number}.npy        # download text embedding
wget --tries=100 https://deploy.laion.ai/8f83b608504d46bb81708ec86e912220/embeddings/metadata/metadata_${number}.parquet    # download metadata
python3 process.py ${number}  # merge files and convert to CSV
```

Script process.py is defined as follows:

```python
import pandas as pd
import numpy as np
import os
import sys

str_i = str(sys.argv[1])
npy_file = "img_emb_" + str_i + '.npy'
metadata_file = "metadata_" + str_i + '.parquet'
text_npy = "text_emb_" + str_i + '.npy'

# load all files
im_emb = np.load(npy_file)
text_emb = np.load(text_npy)
data = pd.read_parquet(metadata_file)

# combine files
data = pd.concat([data, pd.DataFrame({"image_embedding" : [*im_emb]}), pd.DataFrame({"text_embedding" : [*text_emb]})], axis=1, copy=False)

# columns to be imported into ClickHouse
data = data[['url', 'caption', 'NSFW', 'similarity', "image_embedding", "text_embedding"]]

# transform np.arrays to lists
data['image_embedding'] = data['image_embedding'].apply(lambda x: x.tolist())
data['text_embedding'] = data['text_embedding'].apply(lambda x: x.tolist())

# this small hack is needed because caption sometimes contains all kinds of quotes
data['caption'] = data['caption'].apply(lambda x: x.replace("'", " ").replace('"', " "))

# export data as a CSV file
data.to_csv(str_i + '.csv', header=False)

# remove raw data files
os.system(f"rm {npy_file} {metadata_file} {text_npy}")
```

To start the data preparation pipeline, run:

```bash
seq 0 409 | xargs -P1 -I{} bash -c './download.sh {}'
```
The dataset is split into 410 files, each containing ca. 1 million rows. If you'd like to work with a smaller subset of the data, simply adjust the limits, e.g. seq 0 9 | ... .

(The Python script above is very slow (~2-10 minutes per file), takes a lot of memory (41 GB per file), and the resulting CSV files are big (10 GB each), so be careful. If you have enough RAM, increase the -P1 number for more parallelism. If this is still too slow, consider coming up with a better ingestion procedure - maybe converting the .npy files to Parquet, then doing all the other processing with ClickHouse.)

Create table {#create-table}

To create a table initially without indexes, run:

```sql
CREATE TABLE laion
(
    `id` Int64,
    `url` String,
    `caption` String,
    `NSFW` String,
    `similarity` Float32,
    `image_embedding` Array(Float32),
    `text_embedding` Array(Float32)
)
ENGINE = MergeTree
ORDER BY id
SETTINGS index_granularity = 8192
```

To import the CSV files into ClickHouse:

```sql
INSERT INTO laion FROM INFILE '{path_to_csv_files}/*.csv'
```

Note that the id column is just for illustration and is populated by the script with non-unique values.

Run a brute-force vector similarity search {#run-a-brute-force-vector-similarity-search}

To run a brute-force (exact) vector search, run:

```sql
SELECT url, caption FROM laion ORDER BY cosineDistance(image_embedding, {target:Array(Float32)}) LIMIT 10
```

target is an array of 512 elements and a client parameter. A convenient way to obtain such arrays will be presented at the end of the article. For now, we can use the embedding of a random LEGO set picture as target.

Result
```markdown
┌─url───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─caption──────────────────────────────────────────────────────────────────────────┐
1. │ https://s4.thcdn.com/productimg/600/600/11340490-9914447026352671.jpg │ LEGO Friends: Puppy Treats & Tricks (41304) │
2. │ https://www.avenuedelabrique.com/img/uploads/f20fd44bfa4bd49f2a3a5fad0f0dfed7d53c3d2f.jpg │ Nouveau LEGO Friends 41334 Andrea s Park Performance 2018 │
3. │ http://images.esellerpro.com/2489/I/667/303/3938_box_in.jpg │ 3938 LEGO Andreas Bunny House Girls Friends Heartlake Age 5-12 / 62 Pieces New! │
4. │ http://i.shopmania.org/180x180/7/7f/7f1e1a2ab33cde6af4573a9e0caea61293dfc58d.jpg?u=https%3A%2F%2Fs.s-bol.com%2Fimgbase0%2Fimagebase3%2Fextralarge%2FFC%2F4%2F0%2F9%2F9%2F9200000049789904.jpg │ LEGO Friends Avonturenkamp Boomhuis - 41122 │
5. │ https://s.s-bol.com/imgbase0/imagebase/large/FC/5/5/9/4/1004004011684955.jpg │ LEGO Friends Andrea s Theatershow - 3932 │
6. │ https://www.jucariicucubau.ro/30252-home_default/41445-lego-friends-ambulanta-clinicii-veterinare.jpg │ 41445 - LEGO Friends - Ambulanta clinicii veterinare │
7. │ https://cdn.awsli.com.br/600x1000/91/91201/produto/24833262/234c032725.jpg │ LEGO FRIENDS 41336 EMMA S ART CAFÉ │
8. │ https://media.4rgos.it/s/Argos/6174930_R_SET?$Thumb150$&$Web$ │ more details on LEGO Friends Stephanie s Friendship Cake Set - 41308. │
9. │ https://thumbs4.ebaystatic.com/d/l225/m/mG4k6qAONd10voI8NUUMOjw.jpg │ Lego Friends Gymnast 30400 Polybag 26 pcs │
10. │ http://www.ibrickcity.com/wp-content/gallery/41057/thumbs/thumbs_lego-41057-heartlake-horse-show-friends-3.jpg │ lego-41057-heartlake-horse-show-friends-3 │
└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────────────────────────────────────────────────────────┘
10 rows in set. Elapsed: 4.605 sec. Processed 100.38 million rows, 309.98 GB (21.80 million rows/s., 67.31 GB/s.)
```

Run an approximate vector similarity search with a vector similarity index {#run-an-approximate-vector-similarity-search-with-a-vector-similarity-index}

Let's now define two vector similarity indexes on the table:

```sql
ALTER TABLE laion ADD INDEX image_index image_embedding TYPE vector_similarity('hnsw', 'cosineDistance', 512, 'bf16', 64, 256)
ALTER TABLE laion ADD INDEX text_index text_embedding TYPE vector_similarity('hnsw', 'cosineDistance', 512, 'bf16', 64, 256)
```

The parameters and performance considerations for index creation and search are described in the documentation. The definition above specifies an HNSW index using "cosine distance" as the distance metric, with the parameter "hnsw_max_connections_per_layer" set to 64 and the parameter "hnsw_candidate_list_size_for_construction" set to 256. The index uses half-precision brain floats (bfloat16) as quantization to optimize memory usage.

To build and materialize the index, run these statements:

```sql
ALTER TABLE laion MATERIALIZE INDEX image_index;
ALTER TABLE laion MATERIALIZE INDEX text_index;
```

Building and saving the index could take a few minutes or even hours, depending on the number of rows and the HNSW index parameters.

To perform a vector search, just execute the same query again:

```sql
SELECT url, caption FROM laion ORDER BY cosineDistance(image_embedding, {target:Array(Float32)}) LIMIT 10
```

Result
```response
┌─url───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─caption──────────────────────────────────────────────────────────────────────────┐
1. │ https://s4.thcdn.com/productimg/600/600/11340490-9914447026352671.jpg │ LEGO Friends: Puppy Treats & Tricks (41304) │
2. │ https://www.avenuedelabrique.com/img/uploads/f20fd44bfa4bd49f2a3a5fad0f0dfed7d53c3d2f.jpg │ Nouveau LEGO Friends 41334 Andrea s Park Performance 2018 │
3. │ http://images.esellerpro.com/2489/I/667/303/3938_box_in.jpg │ 3938 LEGO Andreas Bunny House Girls Friends Heartlake Age 5-12 / 62 Pieces New! │
4. │ http://i.shopmania.org/180x180/7/7f/7f1e1a2ab33cde6af4573a9e0caea61293dfc58d.jpg?u=https%3A%2F%2Fs.s-bol.com%2Fimgbase0%2Fimagebase3%2Fextralarge%2FFC%2F4%2F0%2F9%2F9%2F9200000049789904.jpg │ LEGO Friends Avonturenkamp Boomhuis - 41122 │
5. │ https://s.s-bol.com/imgbase0/imagebase/large/FC/5/5/9/4/1004004011684955.jpg │ LEGO Friends Andrea s Theatershow - 3932 │
6. │ https://www.jucariicucubau.ro/30252-home_default/41445-lego-friends-ambulanta-clinicii-veterinare.jpg │ 41445 - LEGO Friends - Ambulanta clinicii veterinare │
7. │ https://cdn.awsli.com.br/600x1000/91/91201/produto/24833262/234c032725.jpg │ LEGO FRIENDS 41336 EMMA S ART CAFÉ │
8. │ https://media.4rgos.it/s/Argos/6174930_R_SET?$Thumb150$&$Web$ │ more details on LEGO Friends Stephanie s Friendship Cake Set - 41308. │
9. │ https://thumbs4.ebaystatic.com/d/l225/m/mG4k6qAONd10voI8NUUMOjw.jpg │ Lego Friends Gymnast 30400 Polybag 26 pcs │
10. │ http://www.ibrickcity.com/wp-content/gallery/41057/thumbs/thumbs_lego-41057-heartlake-horse-show-friends-3.jpg │ lego-41057-heartlake-horse-show-friends-3 │
└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────────────────────────────────────────────────────────┘
10 rows in set. Elapsed: 0.019 sec. Processed 137.27 thousand rows, 24.42 MB (7.38 million rows/s., 1.31 GB/s.)
```

The query latency decreased significantly because the nearest neighbours were retrieved using the vector index. Vector similarity search using a vector similarity index may return results that differ slightly from the brute-force search results. An HNSW index can potentially achieve a recall close to 1 (the same accuracy as brute-force search) with careful selection of the HNSW parameters and evaluation of the index quality.

Creating embeddings with UDFs {#creating-embeddings-with-udfs}

One usually wants to create embeddings for new images or new image captions and search for similar image / image caption pairs in the data. We can use a UDF to create the target vector without leaving the client. It is important to use the same model for creating the data and for creating new embeddings for searches. The following scripts utilize the ViT-B/32 model which also underlies the dataset.

Text embeddings {#text-embeddings}

First, store the following Python script in the user_scripts/ directory of your ClickHouse data path and make it executable (chmod +x encode_text.py).

encode_text.py:

```python
#!/usr/bin/python3
# Note: Change the above python3 executable location if a virtual env is being used.
import clip
import torch
import numpy as np
import sys

if __name__ == '__main__':
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)
    for text in sys.stdin:
        inputs = clip.tokenize(text)
        with torch.no_grad():
            text_features = model.encode_text(inputs)[0].tolist()
            print(text_features)
        sys.stdout.flush()
```

Then create encode_text_function.xml in a location referenced by <user_defined_executable_functions_config>/path/to/*_function.xml</user_defined_executable_functions_config> in your ClickHouse server configuration file.
```xml
<functions>
    <function>
        <type>executable</type>
        <name>encode_text</name>
        <return_type>Array(Float32)</return_type>
        <argument>
            <type>String</type>
            <name>text</name>
        </argument>
        <format>TabSeparated</format>
        <command>encode_text.py</command>
        <command_read_timeout>1000000</command_read_timeout>
    </function>
</functions>
```

You can now simply use:

```sql
SELECT encode_text('cat');
```

The first run will be slow because it loads the model, but repeated runs will be fast. We can then copy the output to SET param_target=... and can easily write queries. Alternatively, the encode_text() function can be used directly as an argument to the cosineDistance function:

```sql
SELECT url
FROM laion
ORDER BY cosineDistance(text_embedding, encode_text('a dog and a cat')) ASC
LIMIT 10
```

Note that the encode_text() UDF itself could require a few seconds to compute and emit the embedding vector.
Image embeddings {#image-embeddings}

Image embeddings can be created similarly, and we provide a Python script that can generate an embedding of an image stored locally as a file.

encode_image.py

```python
#!/usr/bin/python3
# Note: Change the above python3 executable location if a virtual env is being used.
import clip
import torch
import numpy as np
from PIL import Image
import sys

if __name__ == '__main__':
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)
    for text in sys.stdin:
        image = preprocess(Image.open(text.strip())).unsqueeze(0).to(device)
        with torch.no_grad():
            image_features = model.encode_image(image)[0].tolist()
            print(image_features)
        sys.stdout.flush()
```

encode_image_function.xml

```xml
<functions>
    <function>
        <type>executable_pool</type>
        <name>encode_image</name>
        <return_type>Array(Float32)</return_type>
        <argument>
            <type>String</type>
            <name>path</name>
        </argument>
        <format>TabSeparated</format>
        <command>encode_image.py</command>
        <command_read_timeout>1000000</command_read_timeout>
    </function>
</functions>
```

Fetch an example image to search:

```shell
# get a random image of a LEGO set
$ wget http://cdn.firstcry.com/brainbees/images/products/thumb/191325a.jpg
```

Then run this query to generate the embedding for the above image:

```sql
SELECT encode_image('/path/to/your/image');
```

The complete search query is:

```sql
SELECT url, caption
FROM laion
ORDER BY cosineDistance(image_embedding, encode_image('/path/to/your/image')) ASC
LIMIT 10
```
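The cosineDistance function used throughout these queries is simply 1 minus the cosine similarity of the two vectors. A minimal pure-Python sketch (illustrative only; ClickHouse computes this natively and far faster):

```python
import math

def cosine_distance(a, b):
    """1 - (a.b) / (|a| * |b|), the quantity the queries order by."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

print(cosine_distance([1.0, 0.0], [1.0, 0.0]))  # 0.0 (same direction)
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0 (orthogonal)
```

A brute-force search then just sorts every row by this distance, which is exactly what `ORDER BY cosineDistance(...) ASC LIMIT 10` expresses; the HNSW index approximates the same ordering without scanning all rows.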
{"source_file": "laion.md"}
[ 0.006199095398187637, -0.07023455202579498, -0.0911303460597992, -0.007259007543325424, 0.06978198885917664, -0.05707506090402603, -0.032180625945329666, 0.021197177469730377, -0.01509921532124281, -0.06360428035259247, 0.04296047240495682, -0.024751979857683182, -0.04326861351728439, 0.03...
cb77531a-9008-421d-914c-df401b78d2f1
---
description: 'Dataset containing 28 million rows of hacker news data.'
sidebar_label: 'Hacker news'
slug: /getting-started/example-datasets/hacker-news
title: 'Hacker News dataset'
doc_type: 'guide'
keywords: ['example dataset', 'hacker news', 'sample data', 'text analysis', 'vector search']
---

# Hacker News dataset

In this tutorial, you'll insert 28 million rows of Hacker News data into a ClickHouse table from both CSV and Parquet formats and run some simple queries to explore the data.

## CSV {#csv}

### Download CSV {#download}

A CSV version of the dataset can be downloaded from our public S3 bucket, or by running this command:

```bash
wget https://datasets-documentation.s3.eu-west-3.amazonaws.com/hackernews/hacknernews.csv.gz
```

At 4.6GB and 28m rows, this compressed file should take 5-10 minutes to download.

### Sample the data {#sampling}

`clickhouse-local` allows users to perform fast processing on local files without having to deploy and configure the ClickHouse server.

Before storing any data in ClickHouse, let's sample the file using clickhouse-local. From the console run:

```bash
clickhouse-local
```

Next, run the following command to explore the data:

```sql title="Query"
SELECT *
FROM file('hacknernews.csv.gz', CSVWithNames)
LIMIT 2
SETTINGS input_format_try_infer_datetimes = 0
FORMAT Vertical
```

```response title="Response"
Row 1:
──────
id:          344065
deleted:     0
type:        comment
by:          callmeed
time:        2008-10-26 05:06:58
text:        What kind of reports do you need? ActiveMerchant just connects your app to a gateway for cc approval and processing. Braintree has very nice reports on transactions and it's very easy to refund a payment. Beyond that, you are dealing with Rails after all–it's pretty easy to scaffold out some reports from your subscriber base.
dead:        0
parent:      344038
poll:        0
kids:        []
url:
score:       0
title:
parts:       []
descendants: 0

Row 2:
──────
id:          344066
deleted:     0
type:        story
by:          acangiano
time:        2008-10-26 05:07:59
text:
dead:        0
parent:      0
poll:        0
kids:        [344111,344202,344329,344606]
url:         http://antoniocangiano.com/2008/10/26/what-arc-should-learn-from-ruby/
score:       33
title:       What Arc should learn from Ruby
parts:       []
descendants: 10
```

There are a lot of subtle capabilities in this command. The `file` operator allows you to read the file from a local disk, specifying only the format `CSVWithNames`. Most importantly, the schema is automatically inferred for you from the file contents. Note also how `clickhouse-local` is able to read the compressed file, inferring the gzip format from the extension. The `Vertical` format is used to more easily see the data for each column.

### Load the data with schema inference {#loading-the-data}
{"source_file": "hacker-news.md"}
The simplest and most powerful tool for data loading is `clickhouse-client`: a feature-rich native command-line client. To load data, you can again exploit schema inference, relying on ClickHouse to determine the types of the columns.

Run the following command to create a table and insert the data directly from the remote CSV file, accessing the contents via the `url` function. The schema is automatically inferred:

```sql
CREATE TABLE hackernews
ENGINE = MergeTree
ORDER BY tuple()
EMPTY AS SELECT *
FROM url('https://datasets-documentation.s3.eu-west-3.amazonaws.com/hackernews/hacknernews.csv.gz', 'CSVWithNames');
```

This creates an empty table using the schema inferred from the data. The `DESCRIBE TABLE` command allows us to understand these assigned types.

```sql title="Query"
DESCRIBE TABLE hackernews
```

```text title="Response"
┌─name────────┬─type─────────────────────┬
│ id          │ Nullable(Float64)        │
│ deleted     │ Nullable(Float64)        │
│ type        │ Nullable(String)         │
│ by          │ Nullable(String)         │
│ time        │ Nullable(String)         │
│ text        │ Nullable(String)         │
│ dead        │ Nullable(Float64)        │
│ parent      │ Nullable(Float64)        │
│ poll        │ Nullable(Float64)        │
│ kids        │ Array(Nullable(Float64)) │
│ url         │ Nullable(String)         │
│ score       │ Nullable(Float64)        │
│ title       │ Nullable(String)         │
│ parts       │ Array(Nullable(Float64)) │
│ descendants │ Nullable(Float64)        │
└─────────────┴──────────────────────────┴
```

To insert the data into this table, use the `INSERT INTO, SELECT` command. Together with the `url` function, data will be streamed directly from the URL:

```sql
INSERT INTO hackernews SELECT *
FROM url('https://datasets-documentation.s3.eu-west-3.amazonaws.com/hackernews/hacknernews.csv.gz', 'CSVWithNames')
```

You've successfully inserted 28 million rows into ClickHouse with a single command!
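The inferred types above look loose because CSV carries no type metadata: every field arrives as text, any field may be empty, and a bare number gives no integer-vs-float hint. The sketch below is a deliberately simplified, hypothetical illustration of that reasoning; ClickHouse's real inference is far more sophisticated (it can detect integers, dates, arrays, and more, controlled by `input_format_*` settings).

```python
def infer_clickhouse_type(values):
    # Toy inference over sampled CSV fields: everything is Nullable because
    # any CSV field can be empty; numbers default to Float64 because text
    # like "344065" carries no explicit integer-vs-float distinction.
    non_null = [v for v in values if v != ""]

    def is_number(v):
        try:
            float(v)
            return True
        except ValueError:
            return False

    if non_null and all(is_number(v) for v in non_null):
        return "Nullable(Float64)"
    return "Nullable(String)"

print(infer_clickhouse_type(["344065", "", "344066"]))   # numeric ids
print(infer_clickhouse_type(["callmeed", "acangiano"]))  # author names
```

This is why the next step, defining an explicit schema, pays off: you know `id` is a `UInt32` and `time` is a `DateTime`; a sampler cannot.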
### Explore the data {#explore}

Sample the Hacker News stories and specific columns by running the following query:

```sql title="Query"
SELECT
    id,
    title,
    type,
    by,
    time,
    url,
    score
FROM hackernews
WHERE type = 'story'
LIMIT 3
FORMAT Vertical
```

```response title="Response"
Row 1:
──────
id:    2596866
title:
type:  story
by:
time:  1306685152
url:
score: 0

Row 2:
──────
id:    2596870
title: WordPress capture users last login date and time
type:  story
by:    wpsnipp
time:  1306685252
url:   http://wpsnipp.com/index.php/date/capture-users-last-login-date-and-time/
score: 1

Row 3:
──────
id:    2596872
title: Recent college graduates get some startup wisdom
type:  story
by:    whenimgone
time:  1306685352
url:   http://articles.chicagotribune.com/2011-05-27/business/sc-cons-0526-started-20110527_1_business-plan-recession-college-graduates
score: 1
```
{"source_file": "hacker-news.md"}
While schema inference is a great tool for initial data exploration, it is "best effort" and not a long-term substitute for defining an optimal schema for your data.

### Define a schema {#define-a-schema}

An obvious immediate optimization is to define a type for each field. In addition to declaring the time field as a `DateTime` type, we define an appropriate type for each of the fields below after dropping our existing dataset. In ClickHouse the primary key `id` for the data is defined via the `ORDER BY` clause.

Selecting appropriate types and choosing which columns to include in the `ORDER BY` clause will help to improve query speed and compression.

Run the query below to drop the old schema and create the improved schema:

```sql title="Query"
DROP TABLE IF EXISTS hackernews;

CREATE TABLE hackernews
(
    id UInt32,
    deleted UInt8,
    type Enum('story' = 1, 'comment' = 2, 'poll' = 3, 'pollopt' = 4, 'job' = 5),
    by LowCardinality(String),
    time DateTime,
    text String,
    dead UInt8,
    parent UInt32,
    poll UInt32,
    kids Array(UInt32),
    url String,
    score Int32,
    title String,
    parts Array(UInt32),
    descendants Int32
)
ENGINE = MergeTree
ORDER BY id
```

With an optimized schema, you can now insert the data from the local file system. Again using `clickhouse-client`, insert the file using the `INFILE` clause with an explicit `INSERT INTO`.

```sql title="Query"
INSERT INTO hackernews FROM INFILE '/data/hacknernews.csv.gz' FORMAT CSVWithNames
```

### Run sample queries {#run-sample-queries}

Some sample queries are presented below to give you inspiration for writing your own queries.

#### How pervasive a topic is "ClickHouse" in Hacker News? {#how-pervasive}

The score field provides a metric of popularity for stories, while the `id` field and `||` concatenation operator can be used to produce a link to the original post.
sql title="Query" SELECT time, score, descendants, title, url, 'https://news.ycombinator.com/item?id=' || toString(id) AS hn_url FROM hackernews WHERE (type = 'story') AND (title ILIKE '%ClickHouse%') ORDER BY score DESC LIMIT 5 FORMAT Vertical ```response title="Response" Row 1: ────── time: 1632154428 score: 519 descendants: 159 title: ClickHouse, Inc. url: https://github.com/ClickHouse/ClickHouse/blob/master/website/blog/en/2021/clickhouse-inc.md hn_url: https://news.ycombinator.com/item?id=28595419 Row 2: ────── time: 1614699632 score: 383 descendants: 134 title: ClickHouse as an alternative to Elasticsearch for log storage and analysis url: https://pixeljets.com/blog/clickhouse-vs-elasticsearch/ hn_url: https://news.ycombinator.com/item?id=26316401
{"source_file": "hacker-news.md"}
Row 3:
──────
time:        1465985177
score:       243
descendants: 70
title:       ClickHouse – high-performance open-source distributed column-oriented DBMS
url:         https://clickhouse.yandex/reference_en.html
hn_url:      https://news.ycombinator.com/item?id=11908254

Row 4:
──────
time:        1578331410
score:       216
descendants: 86
title:       ClickHouse cost-efficiency in action: analyzing 500B rows on an Intel NUC
url:         https://www.altinity.com/blog/2020/1/1/clickhouse-cost-efficiency-in-action-analyzing-500-billion-rows-on-an-intel-nuc
hn_url:      https://news.ycombinator.com/item?id=21970952

Row 5:
──────
time:        1622160768
score:       198
descendants: 55
title:       ClickHouse: An open-source column-oriented database management system
url:         https://github.com/ClickHouse/ClickHouse
hn_url:      https://news.ycombinator.com/item?id=27310247
```

#### Is ClickHouse generating more noise over time?

Here the usefulness of defining the `time` field as a `DateTime` is shown, as using a proper data type allows you to use the `toYYYYMM()` function:

```sql title="Query"
SELECT
    toYYYYMM(time) AS monthYear,
    bar(count(), 0, 120, 20)
FROM hackernews
WHERE (type IN ('story', 'comment')) AND ((title ILIKE '%ClickHouse%') OR (text ILIKE '%ClickHouse%'))
GROUP BY monthYear
ORDER BY monthYear ASC
{"source_file": "hacker-news.md"}
response title="Response" ┌─monthYear─┬─bar(count(), 0, 120, 20)─┐ │ 201606 │ ██▎ │ │ 201607 │ ▏ │ │ 201610 │ ▎ │ │ 201612 │ ▏ │ │ 201701 │ ▎ │ │ 201702 │ █ │ │ 201703 │ ▋ │ │ 201704 │ █ │ │ 201705 │ ██ │ │ 201706 │ ▎ │ │ 201707 │ ▎ │ │ 201708 │ ▏ │ │ 201709 │ ▎ │ │ 201710 │ █▌ │ │ 201711 │ █▌ │ │ 201712 │ ▌ │ │ 201801 │ █▌ │ │ 201802 │ ▋ │ │ 201803 │ ███▏ │ │ 201804 │ ██▏ │ │ 201805 │ ▋ │ │ 201806 │ █▏ │ │ 201807 │ █▌ │ │ 201808 │ ▋ │ │ 201809 │ █▌ │ │ 201810 │ ███▌ │ │ 201811 │ ████ │ │ 201812 │ █▌ │ │ 201901 │ ████▋ │ │ 201902 │ ███ │ │ 201903 │ ▋ │ │ 201904 │ █ │ │ 201905 │ ███▋ │ │ 201906 │ █▏ │ │ 201907 │ ██▎ │ │ 201908 │ ██▋ │ │ 201909 │ █▋ │ │ 201910 │ █ │ │ 201911 │ ███ │ │ 201912 │ █▎ │ │ 202001 │ ███████████▋ │ │ 202002 │ ██████▌ │ │ 202003 │ ███████████▋ │ │ 202004 │ ███████▎ │ │ 202005 │ ██████▏ │ │ 202006 │ ██████▏ │ │ 202007 │ ███████▋ │ │ 202008 │ ███▋ │ │ 202009 │ ████ │ │ 202010 │ ████▌ │ │ 202011 │ █████▏ │ │ 202012 │ ███▋ │ │ 202101 │ ███▏ │ │ 202102 │ █████████ │ │ 202103 │ █████████████▋ │ │ 202104 │ ███▏ │ │ 202105 │ ████████████▋ │ │ 202106 │ ███ │ │ 202107 │ █████▏ │ │ 202108 │ ████▎ │ │ 202109 │ ██████████████████▎ │ │ 202110 │ ▏ │ └───────────┴──────────────────────────┘ It looks like "ClickHouse" is growing in popularity with time. Who are the top commenters on ClickHouse related articles? {#top-commenters}
{"source_file": "hacker-news.md"}
It looks like "ClickHouse" is growing in popularity with time. Who are the top commenters on ClickHouse related articles? {#top-commenters} sql title="Query" SELECT by, count() AS comments FROM hackernews WHERE (type IN ('story', 'comment')) AND ((title ILIKE '%ClickHouse%') OR (text ILIKE '%ClickHouse%')) GROUP BY by ORDER BY comments DESC LIMIT 5 response title="Response" ┌─by──────────┬─comments─┐ │ hodgesrm │ 78 │ │ zX41ZdbW │ 45 │ │ manigandham │ 39 │ │ pachico │ 35 │ │ valyala │ 27 │ └─────────────┴──────────┘ Which comments generate the most interest? {#comments-by-most-interest} sql title="Query" SELECT by, sum(score) AS total_score, sum(length(kids)) AS total_sub_comments FROM hackernews WHERE (type IN ('story', 'comment')) AND ((title ILIKE '%ClickHouse%') OR (text ILIKE '%ClickHouse%')) GROUP BY by ORDER BY total_score DESC LIMIT 5 response title="Response" ┌─by───────┬─total_score─┬─total_sub_comments─┐ │ zX41ZdbW │ 571 │ 50 │ │ jetter │ 386 │ 30 │ │ hodgesrm │ 312 │ 50 │ │ mechmind │ 243 │ 16 │ │ tosh │ 198 │ 12 │ └──────────┴─────────────┴────────────────────┘ Parquet {#parquet} One of the strengths of ClickHouse is its ability to handle any number of formats . CSV represents a rather ideal use case, and is not the most efficient for data exchange. Next, you'll load the data from a Parquet file which is an efficient column-oriented format. Parquet has minimal types, which ClickHouse needs to respect, and this type information is encoded in the format itself. Type inference on a Parquet file will invariably lead to a slightly different schema than the one for the CSV file. 
### Insert the data {#insert-the-data}

Run the following query to read the same data in Parquet format, again using the url function to read the remote data:

```sql
DROP TABLE IF EXISTS hackernews;

CREATE TABLE hackernews
ENGINE = MergeTree
ORDER BY id
SETTINGS allow_nullable_key = 1
EMPTY AS SELECT *
FROM url('https://datasets-documentation.s3.eu-west-3.amazonaws.com/hackernews/hacknernews.parquet', 'Parquet')

INSERT INTO hackernews SELECT *
FROM url('https://datasets-documentation.s3.eu-west-3.amazonaws.com/hackernews/hacknernews.parquet', 'Parquet')
```

:::note Null keys with Parquet
As a condition of the Parquet format, we have to accept that keys might be `NULL`, even though they aren't in the data.
:::

Run the following command to view the inferred schema:
{"source_file": "hacker-news.md"}
```response title="Response"
┌─name────────┬─type───────────────────┬
│ id          │ Nullable(Int64)        │
│ deleted     │ Nullable(UInt8)        │
│ type        │ Nullable(String)       │
│ time        │ Nullable(Int64)        │
│ text        │ Nullable(String)       │
│ dead        │ Nullable(UInt8)        │
│ parent      │ Nullable(Int64)        │
│ poll        │ Nullable(Int64)        │
│ kids        │ Array(Nullable(Int64)) │
│ url         │ Nullable(String)       │
│ score       │ Nullable(Int32)        │
│ title       │ Nullable(String)       │
│ parts       │ Array(Nullable(Int64)) │
│ descendants │ Nullable(Int32)        │
└─────────────┴────────────────────────┴
```

As before with the CSV file, you can specify the schema manually for greater control over the chosen types, and insert the data directly from s3:

```sql
CREATE TABLE hackernews
(
    `id` UInt64,
    `deleted` UInt8,
    `type` String,
    `author` String,
    `timestamp` DateTime,
    `comment` String,
    `dead` UInt8,
    `parent` UInt64,
    `poll` UInt64,
    `children` Array(UInt32),
    `url` String,
    `score` UInt32,
    `title` String,
    `parts` Array(UInt32),
    `descendants` UInt32
)
ENGINE = MergeTree
ORDER BY (type, author);

INSERT INTO hackernews
SELECT * FROM s3(
    'https://datasets-documentation.s3.eu-west-3.amazonaws.com/hackernews/hacknernews.parquet',
    'Parquet',
    'id UInt64, deleted UInt8, type String, by String, time DateTime, text String, dead UInt8, parent UInt64, poll UInt64, kids Array(UInt32), url String, score UInt32, title String, parts Array(UInt32), descendants UInt32');
```

### Add a skipping-index to speed up queries {#add-skipping-index}

To find out how many comments mention "ClickHouse", run the following query:

```sql title="Query"
SELECT count(*)
FROM hackernews
WHERE hasToken(lower(comment), 'clickhouse');
```

```response title="Response"
highlight-next-line
1 row in set. Elapsed: 0.843 sec. Processed 28.74 million rows, 9.75 GB (34.08 million rows/s., 11.57 GB/s.)
┌─count()─┐
│     516 │
└─────────┘
```

Next, you'll create an inverted index on the "comment" column in order to speed this query up. Note that lowercase comments will be indexed to find terms independent of casing.
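Before building the index, it helps to see what a token-based skipping index actually stores: for each token, the set of granules (blocks of rows) that contain it at least once. A query like `hasToken(lower(comment), 'clickhouse')` then only scans the granules listed for that token and skips the rest. The sketch below is a minimal, stdlib-only illustration of that idea with tiny made-up granules; it is not how ClickHouse implements the index internally.

```python
from collections import defaultdict
import re

def tokens(text):
    # Split on non-alphanumeric characters, lowercased, like indexing
    # lower(comment) so lookups are case-insensitive.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def build_token_index(granules):
    # One posting set per token: which granules contain it at least once.
    index = defaultdict(set)
    for gid, comments in enumerate(granules):
        for comment in comments:
            for tok in tokens(comment):
                index[tok].add(gid)
    return index

def granules_to_read(index, term):
    # Only granules listed for the token need scanning; the rest are skipped.
    return sorted(index.get(term.lower(), set()))

granules = [
    ["I benchmarked ClickHouse yesterday", "it was fast"],
    ["unrelated chatter about compilers"],
    ["clickhouse vs druid", "more chatter"],
]
index = build_token_index(granules)
print(granules_to_read(index, "ClickHouse"))  # → [0, 2]
```

The `EXPLAIN indexes = 1` output later in this section shows the same effect at scale: 554 of 3528 granules read, the rest skipped.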
Run the following commands to create the index:

```sql
ALTER TABLE hackernews ADD INDEX comment_idx(lower(comment)) TYPE inverted;
ALTER TABLE hackernews MATERIALIZE INDEX comment_idx;
```

Materialization of the index takes a while (to check if the index was created, use the system table `system.data_skipping_indices`). Run the query again once the index has been created:

```sql title="Query"
SELECT count(*)
FROM hackernews
WHERE hasToken(lower(comment), 'clickhouse');
```
{"source_file": "hacker-news.md"}
Notice how the query now took only 0.248 seconds with the index, down from 0.843 seconds previously without it:

```response title="Response"
highlight-next-line
1 row in set. Elapsed: 0.248 sec. Processed 4.54 million rows, 1.79 GB (18.34 million rows/s., 7.24 GB/s.)
┌─count()─┐
│    1145 │
└─────────┘
```

The `EXPLAIN` clause can be used to understand why the addition of this index improved the query around 3.4x:

```sql title="Query"
EXPLAIN indexes = 1
SELECT count(*)
FROM hackernews
WHERE hasToken(lower(comment), 'clickhouse')
```

```response title="Response"
┌─explain─────────────────────────────────────────┐
│ Expression ((Projection + Before ORDER BY))     │
│   Aggregating                                   │
│     Expression (Before GROUP BY)                │
│       Filter (WHERE)                            │
│         ReadFromMergeTree (default.hackernews)  │
│         Indexes:                                │
│           PrimaryKey                            │
│             Condition: true                     │
│             Parts: 4/4                          │
│             Granules: 3528/3528                 │
│           Skip                                  │
│             Name: comment_idx                   │
│             Description: inverted GRANULARITY 1 │
│             Parts: 4/4                          │
│             Granules: 554/3528                  │
└─────────────────────────────────────────────────┘
```

Notice how the index allowed skipping of a substantial number of granules to speed up the query.

It's also possible to now efficiently search for one, or all of multiple terms:

```sql title="Query"
SELECT count(*)
FROM hackernews
WHERE multiSearchAny(lower(comment), ['oltp', 'olap']);
```

```response title="Response"
┌─count()─┐
│    2177 │
└─────────┘
```

```sql title="Query"
SELECT count(*)
FROM hackernews
WHERE hasToken(lower(comment), 'avx') AND hasToken(lower(comment), 'sve');
```

```response title="Response"
┌─count()─┐
│      22 │
└─────────┘
```
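The two search functions used above have different matching semantics, which the sketch below reproduces in plain Python (an illustrative approximation, not ClickHouse's implementation): `hasToken` matches whole tokens only, while `multiSearchAny` is a substring test over any of the needles.

```python
import re

def has_token(haystack, token):
    # hasToken matches whole tokens, not substrings: "avx2" does not
    # contain the token "avx".
    return token in re.findall(r"[a-z0-9]+", haystack.lower())

def multi_search_any(haystack, needles):
    # multiSearchAny is a substring test: true if any needle occurs anywhere.
    hay = haystack.lower()
    return any(n.lower() in hay for n in needles)

comment = "We moved our OLAP workload off the OLTP database."
print(multi_search_any(comment, ["oltp", "olap"]))              # → True
print(has_token(comment, "avx") and has_token(comment, "sve"))  # → False
```

Combining `hasToken` conditions with `AND`, as in the last query above, requires every token to appear, which is why its count is much smaller.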
{"source_file": "hacker-news.md"}
---
description: 'Dataset containing 28+ million Hacker News postings & their vector embeddings'
sidebar_label: 'Hacker News vector search dataset'
slug: /getting-started/example-datasets/hackernews-vector-search-dataset
title: 'Hacker News vector search dataset'
keywords: ['semantic search', 'vector similarity', 'approximate nearest neighbours', 'embeddings']
doc_type: 'guide'
---

## Introduction {#introduction}

The Hacker News dataset contains 28.74 million postings and their vector embeddings. The embeddings were generated using the SentenceTransformers model all-MiniLM-L6-v2. The dimension of each embedding vector is 384.

This dataset can be used to walk through the design, sizing, and performance aspects of a large-scale, real-world vector search application built on top of user-generated, textual data.

## Dataset details {#dataset-details}

The complete dataset with vector embeddings is made available by ClickHouse as a single Parquet file in an S3 bucket.

We recommend users first run a sizing exercise to estimate the storage and memory requirements for this dataset by referring to the documentation.

## Steps {#steps}

### Create table {#create-table}

Create the `hackernews` table to store the postings and their embeddings and associated attributes:

```sql
CREATE TABLE hackernews
(
    `id` Int32,
    `doc_id` Int32,
    `text` String,
    `vector` Array(Float32),
    `node_info` Tuple(
        start Nullable(UInt64),
        end Nullable(UInt64)),
    `metadata` String,
    `type` Enum8('story' = 1, 'comment' = 2, 'poll' = 3, 'pollopt' = 4, 'job' = 5),
    `by` LowCardinality(String),
    `time` DateTime,
    `title` String,
    `post_score` Int32,
    `dead` UInt8,
    `deleted` UInt8,
    `length` UInt32
)
ENGINE = MergeTree
ORDER BY id;
```

The `id` is just an incrementing integer.
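The sizing exercise recommended above mostly comes down to simple arithmetic on row count, vector dimension, and bytes per value. The sketch below shows rough, back-of-the-envelope figures for this dataset; actual requirements depend on compression, the index configuration, and HNSW graph overhead, so treat these as lower bounds for planning, not guarantees.

```python
def vector_storage_bytes(rows: int, dims: int, bytes_per_value: int) -> int:
    # Uncompressed size of one dense vector column: rows x dims x value size.
    return rows * dims * bytes_per_value

ROWS, DIMS = 28_740_000, 384  # dataset size and all-MiniLM-L6-v2 dimension

raw = vector_storage_bytes(ROWS, DIMS, 4)        # Float32 column in the table
quantized = vector_storage_bytes(ROWS, DIMS, 2)  # bf16 values inside the index

print(f"raw vectors : {raw / 1e9:.1f} GB")
print(f"bf16 index  : {quantized / 1e9:.1f} GB (plus HNSW graph overhead)")
```

This is one reason the index built below quantizes vectors to `bf16`: it roughly halves the in-index footprint relative to `Float32`.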
The additional attributes can be used in predicates to understand vector similarity search combined with post-filtering/pre-filtering, as explained in the documentation.

### Load data {#load-table}

To load the dataset from the Parquet file, run the following SQL statement:

```sql
INSERT INTO hackernews SELECT * FROM s3('https://clickhouse-datasets.s3.amazonaws.com/hackernews-miniLM/hackernews_part_1_of_1.parquet');
```

Inserting 28.74 million rows into the table will take a few minutes.

### Build a vector similarity index {#build-vector-similarity-index}

Run the following SQL to define and build a vector similarity index on the `vector` column of the `hackernews` table:

```sql
ALTER TABLE hackernews ADD INDEX vector_index vector TYPE vector_similarity('hnsw', 'cosineDistance', 384, 'bf16', 64, 512);

ALTER TABLE hackernews MATERIALIZE INDEX vector_index SETTINGS mutations_sync = 2;
```
{"source_file": "hacker-news-vector-search.md"}
The parameters and performance considerations for index creation and search are described in the documentation. The statement above uses values of 64 and 512 respectively for the HNSW hyperparameters `M` and `ef_construction`. Users need to carefully select optimal values for these parameters by evaluating index build time and search result quality corresponding to the selected values.

Building and saving the index could take from a few minutes to hours for the full 28.74 million row dataset, depending on the number of CPU cores available and the storage bandwidth.

### Perform ANN search {#perform-ann-search}

Once the vector similarity index has been built, vector search queries will automatically use the index:

```sql title="Query"
SELECT id, title, text
FROM hackernews
ORDER BY cosineDistance(vector, <search embedding vector>)
LIMIT 10
```

The first load of the vector index into memory could take a few seconds to minutes.

### Generate embeddings for search query {#generating-embeddings-for-search-query}

Sentence Transformers provide local, easy-to-use embedding models for capturing the semantic meaning of sentences and paragraphs.

This Hacker News dataset contains vector embeddings generated from the all-MiniLM-L6-v2 model.

An example Python script is provided below to demonstrate how to programmatically generate embedding vectors using the `sentence_transformers` Python package. The search embedding vector is then passed as an argument to the [`cosineDistance()`](/sql-reference/functions/distance-functions#cosineDistance) function in the `SELECT` query.
```python
from sentence_transformers import SentenceTransformer
import sys
import clickhouse_connect

print("Initializing...")

model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')

chclient = clickhouse_connect.get_client() # ClickHouse credentials here

while True:
    # Take the search query from user
    print("Enter a search query :")
    input_query = sys.stdin.readline()
    texts = [input_query]

    # Run the model and obtain search vector
    print("Generating the embedding for ", input_query)
    embeddings = model.encode(texts)

    print("Querying ClickHouse...")
    params = {'v1': list(embeddings[0]), 'v2': 20}
    result = chclient.query("SELECT id, title, text FROM hackernews ORDER BY cosineDistance(vector, %(v1)s) LIMIT %(v2)s", parameters=params)

    print("Results :")
    for row in result.result_rows:
        print(row[0], row[2][:100])
        print("---------")
```

An example of running the above Python script and similarity search results are shown below (only 100 characters from each of the top 20 posts are printed):

```text
Initializing...

Enter a search query :
Are OLAP cubes useful

Generating the embedding for "Are OLAP cubes useful"

Querying ClickHouse...

Results :

27742647 smartmic: slt2021: OLAP Cube is not dead, as long as you use some form of: 1. GROUP BY multiple fi
{"source_file": "hacker-news-vector-search.md"}
27744260 georgewfraser:A data mart is a logical organization of data to help humans understand the schema. Wh
27761434 mwexler:"We model data according to rigorous frameworks like Kimball or Inmon because we must r
28401230 chotmat: erosenbe0: OLAP database is just a copy, replica, or archive of data with a schema designe
22198879 Merick:+1 for Apache Kylin, it's a great project and awesome open source community. If anyone i
27741776 crazydoggers:I always felt the value of an OLAP cube was uncovering questions you may not know to as
22189480 shadowsun7: _Codemonkeyism: After maintaining an OLAP cube system for some years, I'm not that
27742029 smartmic: gengstrand: My first exposure to OLAP was on a team developing a front end to Essbase that
22364133 irfansharif: simo7: I'm wondering how this technology could work for OLAP cubes. An OLAP cube
23292746 scoresmoke:When I was developing my pet project for Web analytics (<a href="https://github
22198891 js8:It seems that the article makes a categorical error, arguing that OLAP cubes were replaced by co
28421602 chotmat: 7thaccount: Is there any advantage to OLAP cube over plain SQL (large historical database r
22195444 shadowsun7: lkcubing: Thanks for sharing. Interesting write up. While this article accurately capt
22198040 lkcubing:Thanks for sharing. Interesting write up. While this article accurately captures the issu
3973185 stefanu: sgt: Interesting idea. Ofcourse, OLAP isn't just about the underlying cubes and dimensions,
22190903 shadowsun7: js8: It seems that the article makes a categorical error, arguing that OLAP cubes were r
28422241 sradman:OLAP Cubes have been disrupted by Column Stores. Unless you are interested in the history of
28421480 chotmat: sradman: OLAP Cubes have been disrupted by Column Stores.
Unless you are interested in the
27742515 BadInformatics: quantified: OP posts with inverted condition: “OLAP != OLAP Cube” is the actual titl
28422935 chotmat: rstuart4133: I remember hearing about OLAP cubes donkey's years ago (probably not far
```

## Summarization demo application {#summarization-demo-application}

The example above demonstrated semantic search and document retrieval using ClickHouse. A very simple but high-potential generative AI example application is presented next.

The application performs the following steps:

1. Accepts a topic as input from the user
2. Generates an embedding vector for the topic by using SentenceTransformers with model all-MiniLM-L6-v2
3. Retrieves highly relevant posts/comments using vector similarity search on the hackernews table
{"source_file": "hacker-news-vector-search.md"}
[ -0.06909271329641342, -0.04451845586299896, -0.02465173974633217, 0.12750427424907684, 0.05162839964032173, -0.11287996917963028, -0.054015059024095535, 0.01122419536113739, -0.0008807161939330399, -0.010896717198193073, 0.011134332045912743, -0.00980392936617136, 0.03431185707449913, -0.0...
ec2b4d1b-4ab4-4f70-8ce4-7cc415b1e77d
4. Uses LangChain and the OpenAI `gpt-3.5-turbo` Chat API to summarize the content retrieved in step #3. The posts/comments retrieved in step #3 are passed as context to the Chat API and are the key link in Generative AI.

An example from running the summarization application is listed first below, followed by the code for the summarization application. Running the application requires an OpenAI API key to be set in the environment variable `OPENAI_API_KEY`. The OpenAI API key can be obtained after registering at https://platform.openai.com.

This application demonstrates a Generative AI use case that is applicable to multiple enterprise domains, such as customer sentiment analysis, technical support automation, mining user conversations, legal documents, medical records, meeting transcripts, financial statements, etc.

```shell
$ python3 summarize.py

Enter a search topic :
ClickHouse performance experiences

Generating the embedding for ----> ClickHouse performance experiences

Querying ClickHouse to retrieve relevant articles...
Initializing chatgpt-3.5-turbo model...
Summarizing search results retrieved from ClickHouse...

Summary from chatgpt-3.5:
The discussion focuses on comparing ClickHouse with various databases like TimescaleDB, Apache Spark,
AWS Redshift, and QuestDB, highlighting ClickHouse's cost-efficient high performance and suitability
for analytical applications. Users praise ClickHouse for its simplicity, speed, and resource efficiency
in handling large-scale analytics workloads, although some challenges like DMLs and difficulty in backups
are mentioned. ClickHouse is recognized for its real-time aggregate computation capabilities and solid
engineering, with comparisons made to other databases like Druid and MemSQL.
Overall, ClickHouse is seen as a powerful tool for real-time data processing, analytics, and handling
large volumes of data efficiently, gaining popularity for its impressive performance and cost-effectiveness.
```

Code for the above application:

```python
print("Initializing...")

import sys
import json
import time
from sentence_transformers import SentenceTransformer
import clickhouse_connect
from langchain.docstore.document import Document
from langchain.text_splitter import CharacterTextSplitter
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains.summarize import load_summarize_chain
import textwrap
import tiktoken

def num_tokens_from_string(string: str, encoding_name: str) -> int:
    encoding = tiktoken.encoding_for_model(encoding_name)
    num_tokens = len(encoding.encode(string))
    return num_tokens

model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')

chclient = clickhouse_connect.get_client(compress=False) # ClickHouse credentials here
while True:
    # Take the search query from the user
    print("Enter a search topic :")
    input_query = sys.stdin.readline()
    texts = [input_query]

    # Run the model and obtain the search or reference vector
    print("Generating the embedding for ----> ", input_query)
    embeddings = model.encode(texts)

    print("Querying ClickHouse...")
    params = {'v1': list(embeddings[0]), 'v2': 100}
    result = chclient.query(
        "SELECT id, title, text FROM hackernews ORDER BY cosineDistance(vector, %(v1)s) LIMIT %(v2)s",
        parameters=params)

    # Just join all the search results
    doc_results = ""
    for row in result.result_rows:
        doc_results = doc_results + "\n" + row[2]

    print("Initializing chatgpt-3.5-turbo model")
    model_name = "gpt-3.5-turbo"

    text_splitter = CharacterTextSplitter.from_tiktoken_encoder(
        model_name=model_name
    )
    texts = text_splitter.split_text(doc_results)
    docs = [Document(page_content=t) for t in texts]
    llm = ChatOpenAI(temperature=0, model_name=model_name)

    prompt_template = """
Write a concise summary of the following in not more than 10 sentences:
{text}
CONCISE SUMMARY :
"""
    prompt = PromptTemplate(template=prompt_template, input_variables=["text"])
    num_tokens = num_tokens_from_string(doc_results, model_name)

    gpt_35_turbo_max_tokens = 4096
    verbose = False

    print("Summarizing search results retrieved from ClickHouse...")

    # Fit the whole context into one prompt if possible, otherwise map-reduce
    if num_tokens <= gpt_35_turbo_max_tokens:
        chain = load_summarize_chain(llm, chain_type="stuff", prompt=prompt, verbose=verbose)
    else:
        chain = load_summarize_chain(llm, chain_type="map_reduce", map_prompt=prompt,
                                     combine_prompt=prompt, verbose=verbose)

    summary = chain.run(docs)
    print(f"Summary from chatgpt-3.5: {summary}")
```
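The only branching logic in the script is the choice between a single "stuff" summarization call and a map-reduce summarization, driven by the model's context window. A minimal pure-Python sketch of that decision (the `count_tokens` function here is a crude stand-in for tiktoken, and the names are illustrative, not part of the script above):

```python
# Sketch of the stuff-vs-map_reduce decision made in the script above.
# count_tokens() is a rough approximation (one token per word), used only
# so this example runs without the tiktoken dependency.

def count_tokens(text: str) -> int:
    return len(text.split())

def choose_strategy(context: str, max_tokens: int = 4096) -> str:
    """Return 'stuff' if the whole context fits in one prompt, else 'map_reduce'."""
    return "stuff" if count_tokens(context) <= max_tokens else "map_reduce"

print(choose_strategy("ClickHouse is fast."))  # stuff
print(choose_strategy("word " * 5000))         # map_reduce
```

With "stuff", all retrieved rows go into one prompt; map-reduce summarizes chunks independently and then summarizes the summaries, trading extra API calls for an unbounded context size.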
description: 'The Star Schema Benchmark (SSB) data set and queries'
sidebar_label: 'Star Schema Benchmark'
slug: /getting-started/example-datasets/star-schema
title: 'Star Schema Benchmark (SSB, 2009)'
doc_type: 'guide'
keywords: ['example dataset', 'star schema', 'sample data', 'data modeling', 'benchmark']

The Star Schema Benchmark is roughly based on TPC-H's tables and queries but, unlike TPC-H, it uses a star schema layout. The bulk of the data sits in a gigantic fact table which is surrounded by multiple small dimension tables. The queries join the fact table with one or more dimension tables to apply filter criteria, e.g. MONTH = 'JANUARY'.

References:
- Star Schema Benchmark (O'Neil et al.), 2009
- Variations of the Star Schema Benchmark to Test the Effects of Data Skew on Query Performance (Rabl et al.), 2013

First, check out the star schema benchmark repository and compile the data generator:

```bash
git clone https://github.com/vadimtk/ssb-dbgen.git
cd ssb-dbgen
make
```

Then, generate the data. Parameter -s specifies the scale factor. For example, with -s 100, 600 million rows are generated.

```bash
./dbgen -s 1000 -T c
./dbgen -s 1000 -T l
./dbgen -s 1000 -T p
./dbgen -s 1000 -T s
./dbgen -s 1000 -T d
```

Now create tables in ClickHouse:

```sql
CREATE TABLE customer
(
    C_CUSTKEY UInt32,
    C_NAME String,
    C_ADDRESS String,
    C_CITY LowCardinality(String),
    C_NATION LowCardinality(String),
    C_REGION LowCardinality(String),
    C_PHONE String,
    C_MKTSEGMENT LowCardinality(String)
) ENGINE = MergeTree ORDER BY (C_CUSTKEY);

CREATE TABLE lineorder
(
    LO_ORDERKEY UInt32,
    LO_LINENUMBER UInt8,
    LO_CUSTKEY UInt32,
    LO_PARTKEY UInt32,
    LO_SUPPKEY UInt32,
    LO_ORDERDATE Date,
    LO_ORDERPRIORITY LowCardinality(String),
    LO_SHIPPRIORITY UInt8,
    LO_QUANTITY UInt8,
    LO_EXTENDEDPRICE UInt32,
    LO_ORDTOTALPRICE UInt32,
    LO_DISCOUNT UInt8,
    LO_REVENUE UInt32,
    LO_SUPPLYCOST UInt32,
    LO_TAX UInt8,
    LO_COMMITDATE Date,
    LO_SHIPMODE LowCardinality(String)
) ENGINE = MergeTree PARTITION BY toYear(LO_ORDERDATE) ORDER BY (LO_ORDERDATE, LO_ORDERKEY);

CREATE TABLE part
(
    P_PARTKEY UInt32,
    P_NAME String,
    P_MFGR LowCardinality(String),
    P_CATEGORY LowCardinality(String),
    P_BRAND LowCardinality(String),
    P_COLOR LowCardinality(String),
    P_TYPE LowCardinality(String),
    P_SIZE UInt8,
    P_CONTAINER LowCardinality(String)
) ENGINE = MergeTree ORDER BY P_PARTKEY;
CREATE TABLE supplier
(
    S_SUPPKEY UInt32,
    S_NAME String,
    S_ADDRESS String,
    S_CITY LowCardinality(String),
    S_NATION LowCardinality(String),
    S_REGION LowCardinality(String),
    S_PHONE String
) ENGINE = MergeTree ORDER BY S_SUPPKEY;

CREATE TABLE date
(
    D_DATEKEY Date,
    D_DATE FixedString(18),
    D_DAYOFWEEK LowCardinality(String),
    D_MONTH LowCardinality(String),
    D_YEAR UInt16,
    D_YEARMONTHNUM UInt32,
    D_YEARMONTH LowCardinality(FixedString(7)),
    D_DAYNUMINWEEK UInt8,
    D_DAYNUMINMONTH UInt8,
    D_DAYNUMINYEAR UInt16,
    D_MONTHNUMINYEAR UInt8,
    D_WEEKNUMINYEAR UInt8,
    D_SELLINGSEASON String,
    D_LASTDAYINWEEKFL UInt8,
    D_LASTDAYINMONTHFL UInt8,
    D_HOLIDAYFL UInt8,
    D_WEEKDAYFL UInt8
) ENGINE = MergeTree ORDER BY D_DATEKEY;
```

The data can be imported as follows:

```bash
clickhouse-client --query "INSERT INTO customer FORMAT CSV" < customer.tbl
clickhouse-client --query "INSERT INTO part FORMAT CSV" < part.tbl
clickhouse-client --query "INSERT INTO supplier FORMAT CSV" < supplier.tbl
clickhouse-client --query "INSERT INTO lineorder FORMAT CSV" < lineorder.tbl
clickhouse-client --query "INSERT INTO date FORMAT CSV" < date.tbl
```

In many use cases of ClickHouse, multiple tables are converted into a single denormalized flat table. This step is optional; the queries below are listed both in their original form and rewritten for the denormalized table.
```sql
SET max_memory_usage = 20000000000;

CREATE TABLE lineorder_flat
ENGINE = MergeTree ORDER BY (LO_ORDERDATE, LO_ORDERKEY)
AS SELECT
    l.LO_ORDERKEY AS LO_ORDERKEY,
    l.LO_LINENUMBER AS LO_LINENUMBER,
    l.LO_CUSTKEY AS LO_CUSTKEY,
    l.LO_PARTKEY AS LO_PARTKEY,
    l.LO_SUPPKEY AS LO_SUPPKEY,
    l.LO_ORDERDATE AS LO_ORDERDATE,
    l.LO_ORDERPRIORITY AS LO_ORDERPRIORITY,
    l.LO_SHIPPRIORITY AS LO_SHIPPRIORITY,
    l.LO_QUANTITY AS LO_QUANTITY,
    l.LO_EXTENDEDPRICE AS LO_EXTENDEDPRICE,
    l.LO_ORDTOTALPRICE AS LO_ORDTOTALPRICE,
    l.LO_DISCOUNT AS LO_DISCOUNT,
    l.LO_REVENUE AS LO_REVENUE,
    l.LO_SUPPLYCOST AS LO_SUPPLYCOST,
    l.LO_TAX AS LO_TAX,
    l.LO_COMMITDATE AS LO_COMMITDATE,
    l.LO_SHIPMODE AS LO_SHIPMODE,
    c.C_NAME AS C_NAME,
    c.C_ADDRESS AS C_ADDRESS,
    c.C_CITY AS C_CITY,
    c.C_NATION AS C_NATION,
    c.C_REGION AS C_REGION,
    c.C_PHONE AS C_PHONE,
    c.C_MKTSEGMENT AS C_MKTSEGMENT,
    s.S_NAME AS S_NAME,
    s.S_ADDRESS AS S_ADDRESS,
    s.S_CITY AS S_CITY,
    s.S_NATION AS S_NATION,
    s.S_REGION AS S_REGION,
    s.S_PHONE AS S_PHONE,
    p.P_NAME AS P_NAME,
    p.P_MFGR AS P_MFGR,
    p.P_CATEGORY AS P_CATEGORY,
    p.P_BRAND AS P_BRAND,
    p.P_COLOR AS P_COLOR,
    p.P_TYPE AS P_TYPE,
    p.P_SIZE AS P_SIZE,
    p.P_CONTAINER AS P_CONTAINER
FROM lineorder AS l
INNER JOIN customer AS c ON c.C_CUSTKEY = l.LO_CUSTKEY
INNER JOIN supplier AS s ON s.S_SUPPKEY = l.LO_SUPPKEY
INNER JOIN part AS p ON p.P_PARTKEY = l.LO_PARTKEY;
```

The queries are generated by `./qgen -s <scaling_factor>`.
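The denormalization performed by the lineorder_flat table above can be pictured in miniature: each fact row is widened with the matching dimension attributes, so later queries need no joins. A small pure-Python sketch of that idea (all data is made up for illustration):

```python
# Miniature illustration of the star-schema denormalization above: widen each
# fact row with the attributes of its customer, supplier, and part dimensions.
# The table contents below are invented for the example.

customers = {1: {"C_NATION": "FRANCE", "C_REGION": "EUROPE"}}
suppliers = {7: {"S_NATION": "CHINA", "S_REGION": "ASIA"}}
parts     = {3: {"P_BRAND": "MFGR#2221", "P_CATEGORY": "MFGR#22"}}

lineorder = [
    {"LO_ORDERKEY": 100, "LO_CUSTKEY": 1, "LO_SUPPKEY": 7,
     "LO_PARTKEY": 3, "LO_REVENUE": 500},
]

def denormalize(fact_rows):
    flat = []
    for row in fact_rows:
        wide = dict(row)  # copy the fact columns
        wide.update(customers[row["LO_CUSTKEY"]])
        wide.update(suppliers[row["LO_SUPPKEY"]])
        wide.update(parts[row["LO_PARTKEY"]])
        flat.append(wide)
    return flat

flat_rows = denormalize(lineorder)
print(flat_rows[0]["S_REGION"])  # ASIA
```

The cost is storage (dimension values are repeated per fact row), which is why ClickHouse's LowCardinality columns make this trade-off cheap.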
Example queries for `s = 100`:

Q1.1

```sql
SELECT sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS REVENUE
FROM lineorder, date
WHERE LO_ORDERDATE = D_DATEKEY AND D_YEAR = 1993
  AND LO_DISCOUNT BETWEEN 1 AND 3 AND LO_QUANTITY < 25;
```

Denormalized table:

```sql
SELECT sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS revenue
FROM lineorder_flat
WHERE toYear(LO_ORDERDATE) = 1993
  AND LO_DISCOUNT BETWEEN 1 AND 3 AND LO_QUANTITY < 25;
```

Q1.2

```sql
SELECT sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS REVENUE
FROM lineorder, date
WHERE LO_ORDERDATE = D_DATEKEY AND D_YEARMONTHNUM = 199401
  AND LO_DISCOUNT BETWEEN 4 AND 6 AND LO_QUANTITY BETWEEN 26 AND 35;
```

Denormalized table:

```sql
SELECT sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS revenue
FROM lineorder_flat
WHERE toYYYYMM(LO_ORDERDATE) = 199401
  AND LO_DISCOUNT BETWEEN 4 AND 6 AND LO_QUANTITY BETWEEN 26 AND 35;
```

Q1.3

```sql
SELECT sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS REVENUE
FROM lineorder, date
WHERE LO_ORDERDATE = D_DATEKEY AND D_WEEKNUMINYEAR = 6 AND D_YEAR = 1994
  AND LO_DISCOUNT BETWEEN 5 AND 7 AND LO_QUANTITY BETWEEN 26 AND 35;
```

Denormalized table:
```sql
SELECT sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS revenue
FROM lineorder_flat
WHERE toISOWeek(LO_ORDERDATE) = 6 AND toYear(LO_ORDERDATE) = 1994
  AND LO_DISCOUNT BETWEEN 5 AND 7 AND LO_QUANTITY BETWEEN 26 AND 35;
```

Q2.1

```sql
SELECT sum(LO_REVENUE), D_YEAR, P_BRAND
FROM lineorder, date, part, supplier
WHERE LO_ORDERDATE = D_DATEKEY AND LO_PARTKEY = P_PARTKEY AND LO_SUPPKEY = S_SUPPKEY
  AND P_CATEGORY = 'MFGR#12' AND S_REGION = 'AMERICA'
GROUP BY D_YEAR, P_BRAND
ORDER BY D_YEAR, P_BRAND;
```

Denormalized table:

```sql
SELECT sum(LO_REVENUE), toYear(LO_ORDERDATE) AS year, P_BRAND
FROM lineorder_flat
WHERE P_CATEGORY = 'MFGR#12' AND S_REGION = 'AMERICA'
GROUP BY year, P_BRAND
ORDER BY year, P_BRAND;
```

Q2.2

```sql
SELECT sum(LO_REVENUE), D_YEAR, P_BRAND
FROM lineorder, date, part, supplier
WHERE LO_ORDERDATE = D_DATEKEY AND LO_PARTKEY = P_PARTKEY AND LO_SUPPKEY = S_SUPPKEY
  AND P_BRAND BETWEEN 'MFGR#2221' AND 'MFGR#2228' AND S_REGION = 'ASIA'
GROUP BY D_YEAR, P_BRAND
ORDER BY D_YEAR, P_BRAND;
```

Denormalized table:

```sql
SELECT sum(LO_REVENUE), toYear(LO_ORDERDATE) AS year, P_BRAND
FROM lineorder_flat
WHERE P_BRAND >= 'MFGR#2221' AND P_BRAND <= 'MFGR#2228' AND S_REGION = 'ASIA'
GROUP BY year, P_BRAND
ORDER BY year, P_BRAND;
```

Q2.3

```sql
SELECT sum(LO_REVENUE), D_YEAR, P_BRAND
FROM lineorder, date, part, supplier
WHERE LO_ORDERDATE = D_DATEKEY AND LO_PARTKEY = P_PARTKEY AND LO_SUPPKEY = S_SUPPKEY
  AND P_BRAND = 'MFGR#2239' AND S_REGION = 'EUROPE'
GROUP BY D_YEAR, P_BRAND
ORDER BY D_YEAR, P_BRAND;
```

Denormalized table:

```sql
SELECT sum(LO_REVENUE), toYear(LO_ORDERDATE) AS year, P_BRAND
FROM lineorder_flat
WHERE P_BRAND = 'MFGR#2239' AND S_REGION = 'EUROPE'
GROUP BY year, P_BRAND
ORDER BY year, P_BRAND;
```

Q3.1

```sql
SELECT C_NATION, S_NATION, D_YEAR, sum(LO_REVENUE) AS REVENUE
FROM customer, lineorder, supplier, date
WHERE LO_CUSTKEY = C_CUSTKEY AND LO_SUPPKEY = S_SUPPKEY AND LO_ORDERDATE = D_DATEKEY
  AND C_REGION = 'ASIA' AND S_REGION = 'ASIA' AND D_YEAR >= 1992 AND D_YEAR <= 1997
GROUP BY C_NATION, S_NATION, D_YEAR
ORDER BY D_YEAR ASC, REVENUE DESC;
```

Denormalized table:

```sql
SELECT C_NATION, S_NATION, toYear(LO_ORDERDATE) AS year, sum(LO_REVENUE) AS revenue
FROM lineorder_flat
WHERE C_REGION = 'ASIA' AND S_REGION = 'ASIA' AND year >= 1992 AND year <= 1997
GROUP BY C_NATION, S_NATION, year
ORDER BY year ASC, revenue DESC;
```
Q3.2

```sql
SELECT C_CITY, S_CITY, D_YEAR, sum(LO_REVENUE) AS REVENUE
FROM customer, lineorder, supplier, date
WHERE LO_CUSTKEY = C_CUSTKEY AND LO_SUPPKEY = S_SUPPKEY AND LO_ORDERDATE = D_DATEKEY
  AND C_NATION = 'UNITED STATES' AND S_NATION = 'UNITED STATES'
  AND D_YEAR >= 1992 AND D_YEAR <= 1997
GROUP BY C_CITY, S_CITY, D_YEAR
ORDER BY D_YEAR ASC, REVENUE DESC;
```

Denormalized table:

```sql
SELECT C_CITY, S_CITY, toYear(LO_ORDERDATE) AS year, sum(LO_REVENUE) AS revenue
FROM lineorder_flat
WHERE C_NATION = 'UNITED STATES' AND S_NATION = 'UNITED STATES'
  AND year >= 1992 AND year <= 1997
GROUP BY C_CITY, S_CITY, year
ORDER BY year ASC, revenue DESC;
```

Q3.3

```sql
SELECT C_CITY, S_CITY, D_YEAR, sum(LO_REVENUE) AS revenue
FROM customer, lineorder, supplier, date
WHERE LO_CUSTKEY = C_CUSTKEY AND LO_SUPPKEY = S_SUPPKEY AND LO_ORDERDATE = D_DATEKEY
  AND (C_CITY = 'UNITED KI1' OR C_CITY = 'UNITED KI5')
  AND (S_CITY = 'UNITED KI1' OR S_CITY = 'UNITED KI5')
  AND D_YEAR >= 1992 AND D_YEAR <= 1997
GROUP BY C_CITY, S_CITY, D_YEAR
ORDER BY D_YEAR ASC, revenue DESC;
```

Denormalized table:

```sql
SELECT C_CITY, S_CITY, toYear(LO_ORDERDATE) AS year, sum(LO_REVENUE) AS revenue
FROM lineorder_flat
WHERE (C_CITY = 'UNITED KI1' OR C_CITY = 'UNITED KI5')
  AND (S_CITY = 'UNITED KI1' OR S_CITY = 'UNITED KI5')
  AND year >= 1992 AND year <= 1997
GROUP BY C_CITY, S_CITY, year
ORDER BY year ASC, revenue DESC;
```

Q3.4

```sql
SELECT C_CITY, S_CITY, D_YEAR, sum(LO_REVENUE) AS revenue
FROM customer, lineorder, supplier, date
WHERE LO_CUSTKEY = C_CUSTKEY AND LO_SUPPKEY = S_SUPPKEY AND LO_ORDERDATE = D_DATEKEY
  AND (C_CITY = 'UNITED KI1' OR C_CITY = 'UNITED KI5')
  AND (S_CITY = 'UNITED KI1' OR S_CITY = 'UNITED KI5')
  AND D_YEARMONTH = 'Dec1997'
GROUP BY C_CITY, S_CITY, D_YEAR
ORDER BY D_YEAR ASC, revenue DESC;
```

Denormalized table:

```sql
SELECT C_CITY, S_CITY, toYear(LO_ORDERDATE) AS year, sum(LO_REVENUE) AS revenue
FROM lineorder_flat
WHERE (C_CITY = 'UNITED KI1' OR C_CITY = 'UNITED KI5')
  AND (S_CITY = 'UNITED KI1' OR S_CITY = 'UNITED KI5')
  AND toYYYYMM(LO_ORDERDATE) = 199712
GROUP BY C_CITY, S_CITY, year
ORDER BY year ASC, revenue DESC;
```

Q4.1

```sql
SELECT D_YEAR, C_NATION, sum(LO_REVENUE - LO_SUPPLYCOST) AS PROFIT
FROM date, customer, supplier, part, lineorder
WHERE LO_CUSTKEY = C_CUSTKEY AND LO_SUPPKEY = S_SUPPKEY
  AND LO_PARTKEY = P_PARTKEY AND LO_ORDERDATE = D_DATEKEY
  AND C_REGION = 'AMERICA' AND S_REGION = 'AMERICA'
  AND (P_MFGR = 'MFGR#1' OR P_MFGR = 'MFGR#2')
GROUP BY D_YEAR, C_NATION
ORDER BY D_YEAR, C_NATION;
```

Denormalized table:
```sql
SELECT toYear(LO_ORDERDATE) AS year, C_NATION, sum(LO_REVENUE - LO_SUPPLYCOST) AS profit
FROM lineorder_flat
WHERE C_REGION = 'AMERICA' AND S_REGION = 'AMERICA'
  AND (P_MFGR = 'MFGR#1' OR P_MFGR = 'MFGR#2')
GROUP BY year, C_NATION
ORDER BY year ASC, C_NATION ASC;
```

Q4.2

```sql
SELECT D_YEAR, S_NATION, P_CATEGORY, sum(LO_REVENUE - LO_SUPPLYCOST) AS profit
FROM date, customer, supplier, part, lineorder
WHERE LO_CUSTKEY = C_CUSTKEY AND LO_SUPPKEY = S_SUPPKEY
  AND LO_PARTKEY = P_PARTKEY AND LO_ORDERDATE = D_DATEKEY
  AND C_REGION = 'AMERICA' AND S_REGION = 'AMERICA'
  AND (D_YEAR = 1997 OR D_YEAR = 1998)
  AND (P_MFGR = 'MFGR#1' OR P_MFGR = 'MFGR#2')
GROUP BY D_YEAR, S_NATION, P_CATEGORY
ORDER BY D_YEAR, S_NATION, P_CATEGORY;
```

Denormalized table:

```sql
SELECT toYear(LO_ORDERDATE) AS year, S_NATION, P_CATEGORY, sum(LO_REVENUE - LO_SUPPLYCOST) AS profit
FROM lineorder_flat
WHERE C_REGION = 'AMERICA' AND S_REGION = 'AMERICA'
  AND (year = 1997 OR year = 1998)
  AND (P_MFGR = 'MFGR#1' OR P_MFGR = 'MFGR#2')
GROUP BY year, S_NATION, P_CATEGORY
ORDER BY year ASC, S_NATION ASC, P_CATEGORY ASC;
```

Q4.3

```sql
SELECT D_YEAR, S_CITY, P_BRAND, sum(LO_REVENUE - LO_SUPPLYCOST) AS profit
FROM date, customer, supplier, part, lineorder
WHERE LO_CUSTKEY = C_CUSTKEY AND LO_SUPPKEY = S_SUPPKEY
  AND LO_PARTKEY = P_PARTKEY AND LO_ORDERDATE = D_DATEKEY
  AND C_REGION = 'AMERICA' AND S_NATION = 'UNITED STATES'
  AND (D_YEAR = 1997 OR D_YEAR = 1998)
  AND P_CATEGORY = 'MFGR#14'
GROUP BY D_YEAR, S_CITY, P_BRAND
ORDER BY D_YEAR, S_CITY, P_BRAND;
```

Denormalized table:

```sql
SELECT toYear(LO_ORDERDATE) AS year, S_CITY, P_BRAND, sum(LO_REVENUE - LO_SUPPLYCOST) AS profit
FROM lineorder_flat
WHERE S_NATION = 'UNITED STATES'
  AND (year = 1997 OR year = 1998)
  AND P_CATEGORY = 'MFGR#14'
GROUP BY year, S_CITY, P_BRAND
ORDER BY year ASC, S_CITY ASC, P_BRAND ASC;
```
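The Q4 family all compute the same aggregate: profit as `LO_REVENUE - LO_SUPPLYCOST`, summed per group. A tiny pure-Python sketch of what Q4.1's grouping does, over invented rows (illustrative only, not ClickHouse code):

```python
# Sketch of Q4.1's aggregation: sum(LO_REVENUE - LO_SUPPLYCOST) grouped by
# (year, customer nation). The rows below are made up for illustration.
from collections import defaultdict

rows = [
    {"year": 1997, "C_NATION": "BRAZIL", "LO_REVENUE": 900, "LO_SUPPLYCOST": 400},
    {"year": 1997, "C_NATION": "BRAZIL", "LO_REVENUE": 700, "LO_SUPPLYCOST": 300},
    {"year": 1998, "C_NATION": "CANADA", "LO_REVENUE": 500, "LO_SUPPLYCOST": 450},
]

profit = defaultdict(int)
for r in rows:
    profit[(r["year"], r["C_NATION"])] += r["LO_REVENUE"] - r["LO_SUPPLYCOST"]

print(sorted(profit.items()))
# [((1997, 'BRAZIL'), 900), ((1998, 'CANADA'), 50)]
```

On the denormalized lineorder_flat table this is a single scan with no joins, which is exactly why the flat variant of each query tends to be faster.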
description: 'Learn how to use projections to improve the performance of queries that you run frequently using the UK property dataset, which contains data about prices paid for real-estate property in England and Wales'
sidebar_label: 'UK property prices'
slug: /getting-started/example-datasets/uk-price-paid
title: 'The UK property prices dataset'
doc_type: 'guide'
keywords: ['example dataset', 'uk property', 'sample data', 'real estate', 'getting started']

This data contains prices paid for real-estate property in England and Wales. The data is available since 1995, and the size of the dataset in uncompressed form is about 4 GiB (which will only take about 278 MiB in ClickHouse).

- Source: https://www.gov.uk/government/statistical-data-sets/price-paid-data-downloads
- Description of the fields: https://www.gov.uk/guidance/about-the-price-paid-data

Contains HM Land Registry data © Crown copyright and database right 2021. This data is licensed under the Open Government Licence v3.0.

Create the table {#create-table}

```sql
CREATE DATABASE uk;

CREATE TABLE uk.uk_price_paid
(
    price UInt32,
    date Date,
    postcode1 LowCardinality(String),
    postcode2 LowCardinality(String),
    type Enum8('terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4, 'other' = 0),
    is_new UInt8,
    duration Enum8('freehold' = 1, 'leasehold' = 2, 'unknown' = 0),
    addr1 String,
    addr2 String,
    street LowCardinality(String),
    locality LowCardinality(String),
    town LowCardinality(String),
    district LowCardinality(String),
    county LowCardinality(String)
) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2);
```

Preprocess and insert the data {#preprocess-import-data}

We will use the url function to stream the data into ClickHouse. We need to preprocess some of the incoming data first, which includes:
- splitting the postcode into two different columns - postcode1 and postcode2 - which is better for storage and queries
- converting the time field to date, as it only contains 00:00 time
- ignoring the uuid field because we don't need it for analysis
- transforming type and duration to more readable Enum fields using the transform function
- transforming the is_new field from a single-character string (Y/N) to a UInt8 field with 0 or 1
- dropping the last two columns since they all have the same value (which is 0)

The url function streams the data from the web server into your ClickHouse table. The following command inserts 5 million rows into the uk_price_paid table:
```sql
INSERT INTO uk.uk_price_paid
SELECT
    toUInt32(price_string) AS price,
    parseDateTimeBestEffortUS(time) AS date,
    splitByChar(' ', postcode)[1] AS postcode1,
    splitByChar(' ', postcode)[2] AS postcode2,
    transform(a, ['T', 'S', 'D', 'F', 'O'], ['terraced', 'semi-detached', 'detached', 'flat', 'other']) AS type,
    b = 'Y' AS is_new,
    transform(c, ['F', 'L', 'U'], ['freehold', 'leasehold', 'unknown']) AS duration,
    addr1,
    addr2,
    street,
    locality,
    town,
    district,
    county
FROM url(
    'http://prod1.publicdata.landregistry.gov.uk.s3-website-eu-west-1.amazonaws.com/pp-complete.csv',
    'CSV',
    'uuid_string String, price_string String, time String, postcode String, a String, b String, c String, addr1 String, addr2 String, street String, locality String, town String, district String, county String, d String, e String'
) SETTINGS max_http_get_redirects=10;
```

Wait for the data to insert - it will take a minute or two depending on the network speed.

Validate the data {#validate-data}

Let's verify it worked by seeing how many rows were inserted:

```sql runnable
SELECT count()
FROM uk.uk_price_paid
```

At the time this query was run, the dataset had 27,450,499 rows. Let's see what the storage size is of the table in ClickHouse:

```sql runnable
SELECT formatReadableSize(total_bytes)
FROM system.tables
WHERE name = 'uk_price_paid'
```

Notice the size of the table is just 221.43 MiB!

Run some queries {#run-queries}

Let's run some queries to analyze the data:

Query 1. Average price per year {#average-price}

```sql runnable
SELECT
    toYear(date) AS year,
    round(avg(price)) AS price,
    bar(price, 0, 1000000, 80)
FROM uk.uk_price_paid
GROUP BY year
ORDER BY year
```

Query 2. Average price per year in London {#average-price-london}

```sql runnable
SELECT
    toYear(date) AS year,
    round(avg(price)) AS price,
    bar(price, 0, 2000000, 100)
FROM uk.uk_price_paid
WHERE town = 'LONDON'
GROUP BY year
ORDER BY year
```

Something happened to home prices in 2020! But that is probably not a surprise...

Query 3. The most expensive neighborhoods {#most-expensive-neighborhoods}

```sql runnable
SELECT
    town,
    district,
    count() AS c,
    round(avg(price)) AS price,
    bar(price, 0, 5000000, 100)
FROM uk.uk_price_paid
WHERE date >= '2020-01-01'
GROUP BY town, district
HAVING c >= 100
ORDER BY price DESC
LIMIT 100
```

Speeding up queries with projections {#speeding-up-queries-with-projections}

We can speed up these queries with projections. See "Projections" for examples with this dataset.

Test it in the playground {#playground}

The dataset is also available in the Online Playground.
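The per-row preprocessing done by the INSERT...SELECT earlier in this section (postcode split, `transform` mappings, Y/N to 0/1) can be mirrored in plain Python for a single made-up CSV record. This is an illustrative sketch, not part of the pipeline:

```python
# Sketch of the preprocessing the INSERT...SELECT performs on each CSV row.
# The input record below is invented for illustration.

def preprocess(price_string, time, postcode, a, b, c):
    type_map = {"T": "terraced", "S": "semi-detached", "D": "detached",
                "F": "flat", "O": "other"}
    duration_map = {"F": "freehold", "L": "leasehold", "U": "unknown"}
    parts = postcode.split(" ")
    return {
        "price": int(price_string),
        "date": time[:10],  # keep only the date part of "YYYY-MM-DD 00:00"
        "postcode1": parts[0],
        "postcode2": parts[1] if len(parts) > 1 else "",
        "type": type_map.get(a, "other"),
        "is_new": 1 if b == "Y" else 0,
        "duration": duration_map.get(c, "unknown"),
    }

row = preprocess("250000", "2020-06-01 00:00", "SW19 3AR", "F", "N", "L")
print(row["postcode1"], row["type"], row["is_new"])  # SW19 flat 0
```

Doing these transforms at insert time, as the SQL does, means the table stores compact, query-friendly types instead of raw strings.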
description: 'Dataset containing 1 million articles from Wikipedia and their vector embeddings'
sidebar_label: 'dbpedia dataset'
slug: /getting-started/example-datasets/dbpedia-dataset
title: 'dbpedia dataset'
keywords: ['semantic search', 'vector similarity', 'approximate nearest neighbours', 'embeddings']
doc_type: 'guide'

The dbpedia dataset contains 1 million articles from Wikipedia and their vector embeddings generated using the text-embedding-3-large model from OpenAI. The dataset is an excellent starter dataset to understand vector embeddings, vector similarity search, and Generative AI. We use this dataset to demonstrate approximate nearest neighbor search in ClickHouse and a simple but powerful Q&A application.

Dataset details {#dataset-details}

The dataset contains 26 Parquet files located on huggingface.co. The files are named 0.parquet, 1.parquet, ..., 25.parquet. To view some example rows of the dataset, please visit this Hugging Face page.

Create table {#create-table}

Create the dbpedia table to store the article id, title, text, and embedding vector:

```sql
CREATE TABLE dbpedia
(
  id      String,
  title   String,
  text    String,
  vector  Array(Float32) CODEC(NONE)
) ENGINE = MergeTree ORDER BY (id);
```

Load table {#load-table}

To load the dataset from all Parquet files, run the following shell command:

```shell
$ seq 0 25 | xargs -P1 -I{} clickhouse client -q "INSERT INTO dbpedia SELECT _id, title, text, \"text-embedding-3-large-1536-embedding\" FROM url('https://huggingface.co/api/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-1536-1M/parquet/default/train/{}.parquet') SETTINGS max_http_get_redirects=5,enable_url_encoding=0;"
```

Alternatively, individual SQL statements can be run as shown below to load each of the 26 Parquet files:

```sql
INSERT INTO dbpedia SELECT _id, title, text, "text-embedding-3-large-1536-embedding" FROM url('https://huggingface.co/api/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-1536-1M/parquet/default/train/0.parquet') SETTINGS max_http_get_redirects=5,enable_url_encoding=0;

INSERT INTO dbpedia SELECT _id, title, text, "text-embedding-3-large-1536-embedding" FROM url('https://huggingface.co/api/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-1536-1M/parquet/default/train/1.parquet') SETTINGS max_http_get_redirects=5,enable_url_encoding=0;

...

INSERT INTO dbpedia SELECT _id, title, text, "text-embedding-3-large-1536-embedding" FROM url('https://huggingface.co/api/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-1536-1M/parquet/default/train/25.parquet') SETTINGS max_http_get_redirects=5,enable_url_encoding=0;
```

Verify that 1 million rows are seen in the dbpedia table:

```sql
SELECT count(*)
FROM dbpedia

   ┌─count()─┐
1. │ 1000000 │
   └─────────┘
```

Semantic search {#semantic-search}

Recommended reading: "Vector embeddings" OpenAI guide
Semantic search (also referred to as similarity search) using vector embeddings involves the following steps:

1. Accept a search query from a user in natural language, e.g. "Tell me about some scenic rail journeys", "Suspense novels set in Europe", etc.
2. Generate the embedding vector for the search query using the LLM model
3. Find the nearest neighbours to the search embedding vector in the dataset

The nearest neighbours are documents, images, or content that are relevant to the user query. The retrieved results are the key input to Retrieval Augmented Generation (RAG) in Generative AI applications.

Run a brute-force vector similarity search {#run-a-brute-force-vector-similarity-search}

KNN (k-Nearest Neighbours) search, or brute-force search, involves calculating the distance of each vector in the dataset to the search embedding vector and then ordering the distances to get the nearest neighbours. With the dbpedia dataset, a quick technique to visually observe semantic search is to use embedding vectors from the dataset itself as search vectors. For example:

```sql title="Query"
SELECT id, title
FROM dbpedia
ORDER BY cosineDistance(vector, (SELECT vector FROM dbpedia WHERE id = '<dbpedia:The_Remains_of_the_Day>')) ASC
LIMIT 20
```

```response title="Response"
    ┌─id─┬─title───────────────────────────┐
 1. │    │ The Remains of the Day          │
 2. │    │ The Remains of the Day (film)   │
 3. │    │ Never Let Me Go (novel)         │
 4. │    │ Last Orders                     │
 5. │    │ The Unconsoled                  │
 6. │    │ The Hours (novel)               │
 7. │    │ An Artist of the Floating World │
 8. │    │ Heat and Dust                   │
 9. │    │ A Pale View of Hills            │
10. │    │ Howards End (film)              │
11. │    │ When We Were Orphans            │
12. │    │ A Passage to India (film)       │
13. │    │ Memoirs of a Survivor           │
14. │    │ The Child in Time               │
15. │    │ The Sea, the Sea                │
16. │    │ The Master (novel)              │
17. │    │ The Memorial                    │
18. │    │ The Hours (film)                │
19. │    │ Human Remains (film)            │
20. │    │ Kazuo Ishiguro                  │
    └────┴─────────────────────────────────┘
highlight-next-line
20 rows in set. Elapsed: 0.261 sec. Processed 1.00 million rows, 6.22 GB (3.84 million rows/s., 23.81 GB/s.)
```
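The brute-force query above can be sketched in a few lines of pure Python: compute the cosine distance of every stored vector to the search vector and keep the k smallest. Note that ClickHouse's cosineDistance is 1 minus the cosine similarity; the tiny 2-dimensional vectors here are illustrative only.

```python
# Minimal brute-force (KNN) nearest-neighbour search, mirroring the
# cosineDistance() ORDER BY ... LIMIT query above. Illustrative data only.
import math

def cosine_distance(a, b):
    """1 - cosine similarity, matching ClickHouse's cosineDistance()."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def knn(query, vectors, k):
    """vectors: list of (id, vector); return the ids of the k nearest."""
    scored = sorted(vectors, key=lambda iv: cosine_distance(query, iv[1]))
    return [vid for vid, _ in scored[:k]]

vectors = [("a", [1.0, 0.0]), ("b", [0.9, 0.1]), ("c", [0.0, 1.0])]
print(knn([1.0, 0.0], vectors, 2))  # ['a', 'b']
```

Every vector is scored on every search, which is why the exact query scans all 1.00 million rows; the vector index in the next section avoids that full scan.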
Note down the query latency so that we can compare it with the query latency of ANN (using a vector index). Also record the query latency with a cold OS file cache and with max_threads=1 to recognize the real compute usage and storage bandwidth usage (extrapolate it to a production dataset with millions of vectors!).

Build a vector similarity index {#build-vector-similarity-index}

Run the following SQL to define and build a vector similarity index on the vector column:

```sql
ALTER TABLE dbpedia ADD INDEX vector_index vector TYPE vector_similarity('hnsw', 'cosineDistance', 1536, 'bf16', 64, 512);

ALTER TABLE dbpedia MATERIALIZE INDEX vector_index SETTINGS mutations_sync = 2;
```

The parameters and performance considerations for index creation and search are described in the documentation. Building and saving the index can take a few minutes depending on the number of CPU cores available and the storage bandwidth.

Perform ANN search {#perform-ann-search}

Approximate Nearest Neighbours (ANN) refers to a group of techniques (e.g., special data structures like graphs and random forests) which compute results much faster than exact vector search. The result accuracy is typically "good enough" for practical use. Many approximate techniques provide parameters to tune the trade-off between result accuracy and search time.

Once the vector similarity index has been built, vector search queries will automatically use the index:

```sql title="Query"
SELECT id, title
FROM dbpedia
ORDER BY cosineDistance(vector, (SELECT vector FROM dbpedia WHERE id = '<dbpedia:Glacier_Express>')) ASC
LIMIT 20
```
```response title="Response" ┌─id──────────────────────────────────────────────┬─title─────────────────────────────────┐ 1. │ │ Glacier Express │ 2. │ │ BVZ Zermatt-Bahn │ 3. │ │ Gornergrat railway │ 4. │ │ RegioExpress │ 5. │ │ Matterhorn Gotthard Bahn │ 6. │ │ Rhaetian Railway │ 7. │ │ Gotthard railway │ 8. │ │ Furka–Oberalp railway │ 9. │ │ Jungfrau railway │ 10. │ │ Monte Generoso railway │ 11. │ │ Montreux–Oberland Bernois railway │ 12. │ │ Brienz–Rothorn railway │ 13. │ │ Lauterbrunnen–Mürren mountain railway │ 14. │ │ Luzern–Stans–Engelberg railway line │ 15. │ │ Rigi Railways │ 16. │ │ Saint-Gervais–Vallorcine railway │ 17. │ │ Gatwick Express │ 18. │ │ Brünig railway line │ 19. │ │ Regional-Express │ 20. │ │ Schynige Platte railway │ └─────────────────────────────────────────────────┴───────────────────────────────────────┘ highlight-next-line 20 rows in set. Elapsed: 0.025 sec. Processed 32.03 thousand rows, 2.10 MB (1.29 million rows/s., 84.80 MB/s.) ``` Generating embeddings for search query {#generating-embeddings-for-search-query} The similarity search queries seen until now use one of the existing vectors in the dbpedia table as the search vector. In real world applications, the search vector has to be generated for a user input query which could be in natural language. The search vector should be generated by using the same LLM model used to generate embedding vectors for the dataset. An example Python script is listed below to demonstrate how to programmatically call OpenAI API's to generate embedding vectors using the text-embedding-3-large model. The search embedding vector is then passed as an argument to the cosineDistance() function in the SELECT query. Running the script requires an OpenAI API key to be set in the environment variable OPENAI_API_KEY . The OpenAI API key can be obtained after registering at https://platform.openai.com. 
```python
import sys
from openai import OpenAI
import clickhouse_connect

ch_client = clickhouse_connect.get_client(compress=False) # Pass ClickHouse credentials
openai_client = OpenAI() # Set the OPENAI_API_KEY environment variable

def get_embedding(text, model):
    text = text.replace("\n", " ")
    return openai_client.embeddings.create(input=[text], model=model, dimensions=1536).data[0].embedding

while True:
    # Accept the search query from the user
    print("Enter a search query :")
    input_query = sys.stdin.readline()

    # Call the OpenAI API endpoint to get the embedding
    print("Generating the embedding for ", input_query)
    embedding = get_embedding(input_query, model='text-embedding-3-large')

    # Execute the vector search query in ClickHouse
    print("Querying ClickHouse...")
    params = {'v1': embedding, 'v2': 10}
    result = ch_client.query("SELECT id, title, text FROM dbpedia ORDER BY cosineDistance(vector, %(v1)s) LIMIT %(v2)s", parameters=params)
    for row in result.result_rows:
        print(row[0], row[1], row[2])
        print("---------------")
```

## Q&A demo application {#q-and-a-demo-application}

The examples above demonstrated semantic search and document retrieval using ClickHouse. A simple but high-potential generative AI example application is presented next.

The application performs the following steps:

1. Accepts a topic as input from the user
2. Generates an embedding vector for the topic by invoking the OpenAI API with the model `text-embedding-3-large`
3. Retrieves highly relevant Wikipedia articles/documents using vector similarity search on the `dbpedia` table
4. Accepts a free-form question in natural language from the user relating to the topic
5. Uses the OpenAI `gpt-3.5-turbo` Chat API to answer the question based on the knowledge in the documents retrieved in step 3. The documents retrieved in step 3 are passed as context to the Chat API and are the key link in generative AI.

A couple of conversation examples from running the Q&A application are listed first, followed by the code for the Q&A application.
Running the application requires an OpenAI API key to be set in the environment variable `OPENAI_API_KEY`. The OpenAI API key can be obtained after registering at https://platform.openai.com.

```shell
$ python3 QandA.py

Enter a topic : FIFA world cup 1990
Generating the embedding for 'FIFA world cup 1990' and collecting 100 articles related to it from ClickHouse...

Enter your question : Who won the golden boot
Salvatore Schillaci of Italy won the Golden Boot at the 1990 FIFA World Cup.

Enter a topic : Cricket world cup
Generating the embedding for 'Cricket world cup' and collecting 100 articles related to it from ClickHouse...

Enter your question : Which country has hosted the world cup most times
England and Wales have hosted the Cricket World Cup the most times, with the tournament being held in these countries five times - in 1975, 1979, 1983, 1999, and 2019.

$
```

Code:

```python
import sys
import time
from openai import OpenAI
import clickhouse_connect

ch_client = clickhouse_connect.get_client(compress=False) # Pass ClickHouse credentials here
openai_client = OpenAI() # Set the OPENAI_API_KEY environment variable

def get_embedding(text, model):
    text = text.replace("\n", " ")
    return openai_client.embeddings.create(input=[text], model=model, dimensions=1536).data[0].embedding

while True:
    # Take the topic of interest from the user
    print("Enter a topic : ", end="", flush=True)
    input_query = sys.stdin.readline()
    input_query = input_query.rstrip()

    # Generate an embedding vector for the search topic and query ClickHouse
    print("Generating the embedding for '" + input_query + "' and collecting 100 articles related to it from ClickHouse...")
    embedding = get_embedding(input_query, model='text-embedding-3-large')
    params = {'v1': embedding, 'v2': 100}
    result = ch_client.query("SELECT id, title, text FROM dbpedia ORDER BY cosineDistance(vector, %(v1)s) LIMIT %(v2)s", parameters=params)

    # Collect all the matching articles/documents
    results = ""
    for row in result.result_rows:
        results = results + row[2]

    print("\nEnter your question : ", end="", flush=True)
    question = sys.stdin.readline()

    # Prompt for the OpenAI Chat API
    query = f"""Use the below content to answer the subsequent question. If the answer cannot be found, write "I don't know."

Content:
\"\"\"
{results}
\"\"\"

Question: {question}"""

    GPT_MODEL = "gpt-3.5-turbo"
    response = openai_client.chat.completions.create(
        messages=[
            {'role': 'system', 'content': f"You answer questions about {input_query}."},
            {'role': 'user', 'content': query},
        ],
        model=GPT_MODEL,
        temperature=0,
    )

    # Print the answer to the question!
    print(response.choices[0].message.content)
    print("\n")
```
---
description: 'Dataset containing the on-time performance of airline flights'
sidebar_label: 'OnTime airline flight data'
slug: /getting-started/example-datasets/ontime
title: 'OnTime'
doc_type: 'guide'
keywords: ['example dataset', 'flight data', 'sample data', 'airline performance', 'benchmark']
---

This dataset contains data from the Bureau of Transportation Statistics.

## Creating a table {#creating-a-table}
```sql
CREATE TABLE `ontime`
(
    `Year` UInt16,
    `Quarter` UInt8,
    `Month` UInt8,
    `DayofMonth` UInt8,
    `DayOfWeek` UInt8,
    `FlightDate` Date,
    `Reporting_Airline` LowCardinality(String),
    `DOT_ID_Reporting_Airline` Int32,
    `IATA_CODE_Reporting_Airline` LowCardinality(String),
    `Tail_Number` LowCardinality(String),
    `Flight_Number_Reporting_Airline` LowCardinality(String),
    `OriginAirportID` Int32,
    `OriginAirportSeqID` Int32,
    `OriginCityMarketID` Int32,
    `Origin` FixedString(5),
    `OriginCityName` LowCardinality(String),
    `OriginState` FixedString(2),
    `OriginStateFips` FixedString(2),
    `OriginStateName` LowCardinality(String),
    `OriginWac` Int32,
    `DestAirportID` Int32,
    `DestAirportSeqID` Int32,
    `DestCityMarketID` Int32,
    `Dest` FixedString(5),
    `DestCityName` LowCardinality(String),
    `DestState` FixedString(2),
    `DestStateFips` FixedString(2),
    `DestStateName` LowCardinality(String),
    `DestWac` Int32,
    `CRSDepTime` Int32,
    `DepTime` Int32,
    `DepDelay` Int32,
    `DepDelayMinutes` Int32,
    `DepDel15` Int32,
    `DepartureDelayGroups` LowCardinality(String),
    `DepTimeBlk` LowCardinality(String),
    `TaxiOut` Int32,
    `WheelsOff` LowCardinality(String),
    `WheelsOn` LowCardinality(String),
    `TaxiIn` Int32,
    `CRSArrTime` Int32,
    `ArrTime` Int32,
    `ArrDelay` Int32,
    `ArrDelayMinutes` Int32,
    `ArrDel15` Int32,
    `ArrivalDelayGroups` LowCardinality(String),
    `ArrTimeBlk` LowCardinality(String),
    `Cancelled` Int8,
    `CancellationCode` FixedString(1),
    `Diverted` Int8,
    `CRSElapsedTime` Int32,
    `ActualElapsedTime` Int32,
    `AirTime` Int32,
    `Flights` Int32,
    `Distance` Int32,
    `DistanceGroup` Int8,
    `CarrierDelay` Int32,
    `WeatherDelay` Int32,
    `NASDelay` Int32,
    `SecurityDelay` Int32,
    `LateAircraftDelay` Int32,
    `FirstDepTime` Int16,
    `TotalAddGTime` Int16,
    `LongestAddGTime` Int16,
    `DivAirportLandings` Int8,
    `DivReachedDest` Int8,
    `DivActualElapsedTime` Int16,
    `DivArrDelay` Int16,
    `DivDistance` Int16,
    `Div1Airport` LowCardinality(String),
    `Div1AirportID` Int32,
    `Div1AirportSeqID` Int32,
    `Div1WheelsOn` Int16,
    `Div1TotalGTime` Int16,
    `Div1LongestGTime` Int16,
    `Div1WheelsOff` Int16,
    `Div1TailNum` LowCardinality(String),
    `Div2Airport` LowCardinality(String),
    `Div2AirportID` Int32,
    `Div2AirportSeqID` Int32,
    `Div2WheelsOn` Int16,
    `Div2TotalGTime` Int16,
    `Div2LongestGTime` Int16,
    `Div2WheelsOff` Int16,
    `Div2TailNum` LowCardinality(String),
    `Div3Airport` LowCardinality(String),
    `Div3AirportID` Int32,
    `Div3AirportSeqID` Int32,
    `Div3WheelsOn` Int16,
    `Div3TotalGTime` Int16,
    `Div3LongestGTime` Int16,
    `Div3WheelsOff` Int16,
    `Div3TailNum` LowCardinality(String),
    `Div4Airport` LowCardinality(String),
    `Div4AirportID` Int32,
    `Div4AirportSeqID` Int32,
    `Div4WheelsOn` Int16,
    `Div4TotalGTime` Int16,
    `Div4LongestGTime` Int16,
    `Div4WheelsOff` Int16,
    `Div4TailNum` LowCardinality(String),
    `Div5Airport` LowCardinality(String),
    `Div5AirportID` Int32,
    `Div5AirportSeqID` Int32,
    `Div5WheelsOn` Int16,
    `Div5TotalGTime` Int16,
    `Div5LongestGTime` Int16,
    `Div5WheelsOff` Int16,
    `Div5TailNum` LowCardinality(String)
) ENGINE = MergeTree
ORDER BY (Year, Quarter, Month, DayofMonth, FlightDate, IATA_CODE_Reporting_Airline);
```
## Import from raw data {#import-from-raw-data}

Downloading data:

```bash
wget --no-check-certificate --continue https://transtats.bts.gov/PREZIP/On_Time_Reporting_Carrier_On_Time_Performance_1987_present_{1987..2022}_{1..12}.zip
```

Loading data with multiple threads:

```bash
ls -1 *.zip | xargs -I{} -P $(nproc) bash -c "echo {}; unzip -cq {} '*.csv' | sed 's/\.00//g' | clickhouse-client --input_format_csv_empty_as_default 1 --query='INSERT INTO ontime FORMAT CSVWithNames'"
```

(If you run into memory shortages or other issues on your server, remove the `-P $(nproc)` part.)

## Import from a saved copy {#import-from-a-saved-copy}

Alternatively, you can import data from a saved copy with the following query:

```sql
INSERT INTO ontime SELECT * FROM s3('https://clickhouse-public-datasets.s3.amazonaws.com/ontime/csv_by_year/*.csv.gz', CSVWithNames) SETTINGS max_insert_threads = 40;
```

The snapshot was created on 2022-05-29.

## Queries {#queries}

Q0.

```sql
SELECT avg(c1)
FROM
(
    SELECT Year, Month, count(*) AS c1
    FROM ontime
    GROUP BY Year, Month
);
```

Q1. The number of flights per day from the year 2000 to 2008

```sql
SELECT DayOfWeek, count(*) AS c
FROM ontime
WHERE Year >= 2000 AND Year <= 2008
GROUP BY DayOfWeek
ORDER BY c DESC;
```

Q2. The number of flights delayed by more than 10 minutes, grouped by the day of the week, for 2000-2008

```sql
SELECT DayOfWeek, count(*) AS c
FROM ontime
WHERE DepDelay > 10 AND Year >= 2000 AND Year <= 2008
GROUP BY DayOfWeek
ORDER BY c DESC;
```

Q3. The number of delays by airport for 2000-2008

```sql
SELECT Origin, count(*) AS c
FROM ontime
WHERE DepDelay > 10 AND Year >= 2000 AND Year <= 2008
GROUP BY Origin
ORDER BY c DESC
LIMIT 10;
```

Q4. The number of delays by carrier for 2007

```sql
SELECT IATA_CODE_Reporting_Airline AS Carrier, count(*)
FROM ontime
WHERE DepDelay > 10 AND Year = 2007
GROUP BY Carrier
ORDER BY count(*) DESC;
```

Q5. The percentage of delays by carrier for 2007

```sql
SELECT Carrier, c, c2, c * 100 / c2 AS c3
FROM
(
    SELECT IATA_CODE_Reporting_Airline AS Carrier, count(*) AS c
    FROM ontime
    WHERE DepDelay > 10 AND Year = 2007
    GROUP BY Carrier
) q
JOIN
(
    SELECT IATA_CODE_Reporting_Airline AS Carrier, count(*) AS c2
    FROM ontime
    WHERE Year = 2007
    GROUP BY Carrier
) qq USING Carrier
ORDER BY c3 DESC;
```

Better version of the same query:

```sql
SELECT IATA_CODE_Reporting_Airline AS Carrier, avg(DepDelay > 10) * 100 AS c3
FROM ontime
WHERE Year = 2007
GROUP BY Carrier
ORDER BY c3 DESC
```

Q6. The previous request for a broader range of years, 2000-2008

```sql
SELECT Carrier, c, c2, c * 100 / c2 AS c3
FROM
(
    SELECT IATA_CODE_Reporting_Airline AS Carrier, count(*) AS c
    FROM ontime
    WHERE DepDelay > 10 AND Year >= 2000 AND Year <= 2008
    GROUP BY Carrier
) q
JOIN
(
    SELECT IATA_CODE_Reporting_Airline AS Carrier, count(*) AS c2
    FROM ontime
    WHERE Year >= 2000 AND Year <= 2008
    GROUP BY Carrier
) qq USING Carrier
ORDER BY c3 DESC;
```
Better version of the same query:

```sql
SELECT IATA_CODE_Reporting_Airline AS Carrier, avg(DepDelay > 10) * 100 AS c3
FROM ontime
WHERE Year >= 2000 AND Year <= 2008
GROUP BY Carrier
ORDER BY c3 DESC;
```

Q7. Percentage of flights delayed for more than 10 minutes, by year

```sql
SELECT Year, c1 / c2
FROM
(
    SELECT Year, count(*) * 100 AS c1
    FROM ontime
    WHERE DepDelay > 10
    GROUP BY Year
) q
JOIN
(
    SELECT Year, count(*) AS c2
    FROM ontime
    GROUP BY Year
) qq USING (Year)
ORDER BY Year;
```

Better version of the same query:

```sql
SELECT Year, avg(DepDelay > 10) * 100
FROM ontime
GROUP BY Year
ORDER BY Year;
```

Q8. The most popular destinations by the number of directly connected cities for various year ranges

```sql
SELECT DestCityName, uniqExact(OriginCityName) AS u
FROM ontime
WHERE Year >= 2000 AND Year <= 2010
GROUP BY DestCityName
ORDER BY u DESC
LIMIT 10;
```

Q9.

```sql
SELECT Year, count(*) AS c1
FROM ontime
GROUP BY Year;
```

Q10.

```sql
SELECT
    min(Year), max(Year), IATA_CODE_Reporting_Airline AS Carrier, count(*) AS cnt,
    sum(ArrDelayMinutes > 30) AS flights_delayed,
    round(sum(ArrDelayMinutes > 30) / count(*), 2) AS rate
FROM ontime
WHERE
    DayOfWeek NOT IN (6, 7)
    AND OriginState NOT IN ('AK', 'HI', 'PR', 'VI')
    AND DestState NOT IN ('AK', 'HI', 'PR', 'VI')
    AND FlightDate < '2010-01-01'
GROUP BY Carrier
HAVING cnt > 100000 AND max(Year) > 1990
ORDER BY rate DESC
LIMIT 1000;
```

Bonus:

```sql
SELECT avg(cnt)
FROM
(
    SELECT Year, Month, count(*) AS cnt
    FROM ontime
    WHERE DepDel15 = 1
    GROUP BY Year, Month
);

SELECT avg(c1)
FROM
(
    SELECT Year, Month, count(*) AS c1
    FROM ontime
    GROUP BY Year, Month
);

SELECT DestCityName, uniqExact(OriginCityName) AS u
FROM ontime
GROUP BY DestCityName
ORDER BY u DESC
LIMIT 10;

SELECT OriginCityName, DestCityName, count() AS c
FROM ontime
GROUP BY OriginCityName, DestCityName
ORDER BY c DESC
LIMIT 10;

SELECT OriginCityName, count() AS c
FROM ontime
GROUP BY OriginCityName
ORDER BY c DESC
LIMIT 10;
```

You can also play with the data in the Playground (example).

This performance test was created by Vadim Tkachenko. See:

- https://www.percona.com/blog/2009/10/02/analyzing-air-traffic-performance-with-infobright-and-monetdb/
- https://www.percona.com/blog/2009/10/26/air-traffic-queries-in-luciddb/
- https://www.percona.com/blog/2009/11/02/air-traffic-queries-in-infinidb-early-alpha/
- https://www.percona.com/blog/2014/04/21/using-apache-hadoop-and-impala-together-with-mysql-for-data-analysis/
- https://www.percona.com/blog/2016/01/07/apache-spark-with-air-ontime-performance-data/
- http://nickmakos.blogspot.ru/2012/08/analyzing-air-traffic-performance-with.html
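A note on the `avg(DepDelay > 10) * 100` pattern used in the "better version" queries above: in ClickHouse a comparison evaluates to 0 or 1, so averaging it yields the fraction of rows satisfying the condition — the same number the join-based versions compute with two separate aggregations. A pure-Python sketch of this identity (toy delay values, not real data):

```python
delays = [5, 12, 0, 45, 8, 30, 11, 2]  # toy DepDelay values, in minutes

# Join-style computation: delayed count * 100 / total count
c = sum(1 for d in delays if d > 10)
c2 = len(delays)
pct_join = c * 100 / c2

# avg(DepDelay > 10) * 100: the mean of a 0/1 indicator
pct_avg = sum(d > 10 for d in delays) / len(delays) * 100

print(pct_join, pct_avg)  # both 50.0
```

The single-pass form avoids the self-join entirely, which is why it is both shorter and faster.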
---
description: '131 million rows of weather observation data for the last 128 yrs'
sidebar_label: 'Taiwan historical weather datasets'
slug: /getting-started/example-datasets/tw-weather
title: 'Taiwan historical weather datasets'
doc_type: 'guide'
keywords: ['example dataset', 'weather', 'taiwan', 'sample data', 'climate data']
---

This dataset contains historical meteorological observation measurements for the last 128 years. Each row is a measurement for a point in time at a weather station. The origin of this dataset is available here, and the list of weather station numbers can be found here.

The sources of the meteorological datasets include the meteorological stations established by the Central Weather Administration (station codes beginning with C0, C1, and 4) and the agricultural meteorological stations belonging to the Council of Agriculture (station codes other than those mentioned above):

- `StationId`
- `MeasuredDate`, the observation time
- `StnPres`, the station air pressure
- `SeaPres`, the sea level pressure
- `Td`, the dew point temperature
- `RH`, the relative humidity
- Other elements where available

## Downloading the data {#downloading-the-data}

- A pre-processed version of the data for ClickHouse, which has been cleaned, re-structured, and enriched. This dataset covers the years from 1896 to 2023.
- The original raw data, which you can download and convert to the format required by ClickHouse. Users wanting to add their own columns may wish to explore this approach.

### Pre-processed data {#pre-processed-data}

The dataset has also been re-structured from a measurement per line to a row per weather station id and measured date, i.e.
```csv
StationId,MeasuredDate,StnPres,Tx,RH,WS,WD,WSGust,WDGust,Precp,GloblRad,TxSoil0cm,TxSoil5cm,TxSoil20cm,TxSoil50cm,TxSoil100cm,SeaPres,Td,PrecpHour,SunShine,TxSoil10cm,EvapA,Visb,UVI,Cloud Amount,TxSoil30cm,TxSoil200cm,TxSoil300cm,TxSoil500cm,VaporPressure
C0X100,2016-01-01 01:00:00,1022.1,16.1,72,1.1,8.0,,,,,,,,,,,,,,,,,,,,,,,
C0X100,2016-01-01 02:00:00,1021.6,16.0,73,1.2,358.0,,,,,,,,,,,,,,,,,,,,,,,
C0X100,2016-01-01 03:00:00,1021.3,15.8,74,1.5,353.0,,,,,,,,,,,,,,,,,,,,,,,
C0X100,2016-01-01 04:00:00,1021.2,15.8,74,1.7,8.0,,,,,,,,,,,,,,,,,,,,,,,
```

This makes the table easy to query and less sparse; some elements are null because they are not measured at that weather station.

This dataset is available in the following Google Cloud Storage location. Either download the dataset to your local filesystem (and insert it with the ClickHouse client) or insert it directly into ClickHouse (see Inserting from URL).

To download:

```bash
wget https://storage.googleapis.com/taiwan-weather-observaiton-datasets/preprocessed_weather_daily_1896_2023.tar.gz

# Option: Validate the checksum
md5sum preprocessed_weather_daily_1896_2023.tar.gz
# Checksum should be equal to: 11b484f5bd9ddafec5cfb131eb2dd008
```
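The checksum validation can also be scripted; a minimal Python sketch using `hashlib`, reading in chunks so that large archives are not loaded into memory (the file name and expected digest in the commented-out assertion are the ones quoted above):

```python
import hashlib

def md5sum(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, reading it in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Example: verify a downloaded archive against the published checksum
# assert md5sum("preprocessed_weather_daily_1896_2023.tar.gz") == "11b484f5bd9ddafec5cfb131eb2dd008"
```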
```bash
tar -xzvf preprocessed_weather_daily_1896_2023.tar.gz
daily_weather_preprocessed_1896_2023.csv

# Option: Validate the checksum
md5sum daily_weather_preprocessed_1896_2023.csv
# Checksum should be equal to: 1132248c78195c43d93f843753881754
```

### Original raw data {#original-raw-data}

The following details the steps to download the original raw data and transform and convert it as you want.

#### Download {#download}

To download the original raw data:

```bash
mkdir tw_raw_weather_data && cd tw_raw_weather_data

wget https://storage.googleapis.com/taiwan-weather-observaiton-datasets/raw_data_weather_daily_1896_2023.tar.gz

# Option: Validate the checksum
md5sum raw_data_weather_daily_1896_2023.tar.gz
# Checksum should be equal to: b66b9f137217454d655e3004d7d1b51a

tar -xzvf raw_data_weather_daily_1896_2023.tar.gz
466920_1928.csv
466920_1929.csv
466920_1930.csv
466920_1931.csv
...

# Option: Validate the checksum
cat *.csv | md5sum
# Checksum should be equal to: b26db404bf84d4063fac42e576464ce1
```

#### Retrieve the Taiwan weather stations {#retrieve-the-taiwan-weather-stations}

```bash
wget -O weather_sta_list.csv https://github.com/Raingel/weather_station_list/raw/main/data/weather_sta_list.csv

# Option: Convert the UTF-8-BOM to UTF-8 encoding
sed -i '1s/^\xEF\xBB\xBF//' weather_sta_list.csv
```

## Create table schema {#create-table-schema}

Create the MergeTree table in ClickHouse (from the ClickHouse client).
```sql
CREATE TABLE tw_weather_data (
    StationId String null,
    MeasuredDate DateTime64,
    StnPres Float64 null,
    SeaPres Float64 null,
    Tx Float64 null,
    Td Float64 null,
    RH Float64 null,
    WS Float64 null,
    WD Float64 null,
    WSGust Float64 null,
    WDGust Float64 null,
    Precp Float64 null,
    PrecpHour Float64 null,
    SunShine Float64 null,
    GloblRad Float64 null,
    TxSoil0cm Float64 null,
    TxSoil5cm Float64 null,
    TxSoil10cm Float64 null,
    TxSoil20cm Float64 null,
    TxSoil50cm Float64 null,
    TxSoil100cm Float64 null,
    TxSoil30cm Float64 null,
    TxSoil200cm Float64 null,
    TxSoil300cm Float64 null,
    TxSoil500cm Float64 null,
    VaporPressure Float64 null,
    UVI Float64 null,
    "Cloud Amount" Float64 null,
    EvapA Float64 null,
    Visb Float64 null
)
ENGINE = MergeTree
ORDER BY (MeasuredDate);
```

## Inserting into ClickHouse {#inserting-into-clickhouse}

### Inserting from local file {#inserting-from-local-file}

Data can be inserted from a local file as follows (from the ClickHouse client):

```sql
INSERT INTO tw_weather_data FROM INFILE '/path/to/daily_weather_preprocessed_1896_2023.csv'
```

where `/path/to` represents the specific user path to the local file on disk. The sample response output after inserting data into ClickHouse is as follows:
```response
Query id: 90e4b524-6e14-4855-817c-7e6f98fbeabb

Ok.

131985329 rows in set. Elapsed: 71.770 sec. Processed 131.99 million rows, 10.06 GB (1.84 million rows/s., 140.14 MB/s.)
Peak memory usage: 583.23 MiB.
```

### Inserting from URL {#inserting-from-url}

```sql
INSERT INTO tw_weather_data SELECT *
FROM url('https://storage.googleapis.com/taiwan-weather-observaiton-datasets/daily_weather_preprocessed_1896_2023.csv', 'CSVWithNames')
```

To learn how to speed this up, please see our blog post on tuning large data loads.

## Check data rows and sizes {#check-data-rows-and-sizes}

Let's see how many rows were inserted:

```sql
SELECT formatReadableQuantity(count())
FROM tw_weather_data;
```

```response
┌─formatReadableQuantity(count())─┐
│ 131.99 million                  │
└─────────────────────────────────┘
```

Let's see how much disk space is used for this table:

```sql
SELECT
    formatReadableSize(sum(bytes)) AS disk_size,
    formatReadableSize(sum(data_uncompressed_bytes)) AS uncompressed_size
FROM system.parts
WHERE (`table` = 'tw_weather_data') AND active
```

```response
┌─disk_size─┬─uncompressed_size─┐
│ 2.13 GiB  │ 32.94 GiB         │
└───────────┴───────────────────┘
```

## Sample queries {#sample-queries}

### Q1: Retrieve the highest dew point temperature for each weather station in the specific year {#q1-retrieve-the-highest-dew-point-temperature-for-each-weather-station-in-the-specific-year}

```sql
SELECT
    StationId,
    max(Td) AS max_td
FROM tw_weather_data
WHERE (year(MeasuredDate) = 2023) AND (Td IS NOT NULL)
GROUP BY StationId
```

```response
┌─StationId─┬─max_td─┐
│ 466940    │      1 │
│ 467300    │      1 │
│ 467540    │      1 │
│ 467490    │      1 │
│ 467080    │      1 │
│ 466910    │      1 │
│ 467660    │      1 │
│ 467270    │      1 │
│ 467350    │      1 │
│ 467571    │      1 │
│ 466920    │      1 │
│ 467650    │      1 │
│ 467550    │      1 │
│ 467480    │      1 │
│ 467610    │      1 │
│ 467050    │      1 │
│ 467590    │      1 │
│ 466990    │      1 │
│ 467060    │      1 │
│ 466950    │      1 │
│ 467620    │      1 │
│ 467990    │      1 │
│ 466930    │      1 │
│ 467110    │      1 │
│ 466881    │      1 │
│ 467410    │      1 │
│ 467441    │      1 │
│ 467420    │      1 │
│ 467530    │      1 │
│ 466900    │      1 │
└───────────┴────────┘

30 rows in set. Elapsed: 0.045 sec. Processed 6.41 million rows, 187.33 MB (143.92 million rows/s., 4.21 GB/s.)
```

### Q2: Raw data fetching with the specific duration time range, fields and weather station {#q2-raw-data-fetching-with-the-specific-duration-time-range-fields-and-weather-station}
```sql
SELECT
    StnPres,
    SeaPres,
    Tx,
    Td,
    RH,
    WS,
    WD,
    WSGust,
    WDGust,
    Precp,
    PrecpHour
FROM tw_weather_data
WHERE (StationId = 'C0UB10') AND (MeasuredDate >= '2023-12-23') AND (MeasuredDate < '2023-12-24')
ORDER BY MeasuredDate ASC
LIMIT 10
```

```response
┌─StnPres─┬─SeaPres─┬───Tx─┬───Td─┬─RH─┬──WS─┬──WD─┬─WSGust─┬─WDGust─┬─Precp─┬─PrecpHour─┐
│  1029.5 │    ᴺᵁᴸᴸ │ 11.8 │ ᴺᵁᴸᴸ │ 78 │ 2.7 │ 271 │    5.5 │    275 │ -99.8 │     -99.8 │
│  1029.8 │    ᴺᵁᴸᴸ │ 12.3 │ ᴺᵁᴸᴸ │ 78 │ 2.7 │ 289 │    5.5 │    308 │ -99.8 │     -99.8 │
│  1028.6 │    ᴺᵁᴸᴸ │ 12.3 │ ᴺᵁᴸᴸ │ 79 │ 2.3 │ 251 │    6.1 │    289 │ -99.8 │     -99.8 │
│  1028.2 │    ᴺᵁᴸᴸ │   13 │ ᴺᵁᴸᴸ │ 75 │ 4.3 │ 312 │    7.5 │    316 │ -99.8 │     -99.8 │
│  1027.8 │    ᴺᵁᴸᴸ │ 11.1 │ ᴺᵁᴸᴸ │ 89 │ 7.1 │ 310 │   11.6 │    322 │ -99.8 │     -99.8 │
│  1027.8 │    ᴺᵁᴸᴸ │ 11.6 │ ᴺᵁᴸᴸ │ 90 │ 3.1 │ 269 │   10.7 │    295 │ -99.8 │     -99.8 │
│  1027.9 │    ᴺᵁᴸᴸ │ 12.3 │ ᴺᵁᴸᴸ │ 89 │ 4.7 │ 296 │    8.1 │    310 │ -99.8 │     -99.8 │
│  1028.2 │    ᴺᵁᴸᴸ │ 12.2 │ ᴺᵁᴸᴸ │ 94 │ 2.5 │ 246 │    7.1 │    283 │ -99.8 │     -99.8 │
│  1028.4 │    ᴺᵁᴸᴸ │ 12.5 │ ᴺᵁᴸᴸ │ 94 │ 3.1 │ 265 │    4.8 │    297 │ -99.8 │     -99.8 │
│  1028.3 │    ᴺᵁᴸᴸ │ 13.6 │ ᴺᵁᴸᴸ │ 91 │ 1.2 │ 273 │    4.4 │    256 │ -99.8 │     -99.8 │
└─────────┴─────────┴──────┴──────┴────┴─────┴─────┴────────┴────────┴───────┴───────────┘

10 rows in set. Elapsed: 0.009 sec. Processed 91.70 thousand rows, 2.33 MB (9.67 million rows/s., 245.31 MB/s.)
```

## Credits {#credits}

We would like to acknowledge the efforts of the Central Weather Administration and the Agricultural Meteorological Observation Network (Station) of the Council of Agriculture for preparing, cleaning, and distributing this dataset. We appreciate their efforts.

Ou, J.-H., Kuo, C.-H., Wu, Y.-F., Lin, G.-C., Lee, M.-H., Chen, R.-K., Chou, H.-P., Wu, H.-Y., Chu, S.-C., Lai, Q.-J., Tsai, Y.-C., Lin, C.-C., Kuo, C.-C., Liao, C.-T., Chen, Y.-N., Chu, Y.-W., Chen, C.-Y., 2023. Application-oriented deep learning model for early warning of rice blast in Taiwan. Ecological Informatics 73, 101950. https://doi.org/10.1016/j.ecoinf.2022.101950 [13/12/2022]
---
description: 'Ingest and query Tab Separated Value data in 5 steps'
sidebar_label: 'NYPD complaint data'
slug: /getting-started/example-datasets/nypd_complaint_data
title: 'NYPD Complaint Data'
doc_type: 'guide'
keywords: ['example dataset', 'nypd', 'crime data', 'sample data', 'public data']
---

Tab separated value (TSV) files are common and may include field headings as the first line of the file. ClickHouse can ingest TSVs, and can also query TSVs without ingesting the files. This guide covers both of these cases. If you need to query or ingest CSV files, the same techniques work; simply substitute `CSV` for `TSV` in your format arguments.

While working through this guide you will:
- **Investigate**: query the structure and content of the TSV file.
- **Determine the target ClickHouse schema**: choose proper data types and map the existing data to those types.
- **Create a ClickHouse table**.
- **Preprocess and stream** the data to ClickHouse.
- **Run some queries** against ClickHouse.

The dataset used in this guide comes from the NYC Open Data team, and contains data about "all valid felony, misdemeanor, and violation crimes reported to the New York City Police Department (NYPD)". At the time of writing, the data file is 166MB, but it is updated regularly.

**Source**: data.cityofnewyork.us
**Terms of use**: https://www1.nyc.gov/home/terms-of-use.page

## Prerequisites {#prerequisites}

- Download the dataset by visiting the NYPD Complaint Data Current (Year To Date) page, clicking the Export button, and choosing **TSV for Excel**.
- Install ClickHouse server and client.

### A note about the commands described in this guide {#a-note-about-the-commands-described-in-this-guide}

There are two types of commands in this guide:
- Some of the commands query the TSV files; these are run at the command prompt.
- The rest of the commands query ClickHouse; these are run in `clickhouse-client` or the Play UI.
:::note
The examples in this guide assume that you have saved the TSV file to `${HOME}/NYPD_Complaint_Data_Current__Year_To_Date_.tsv`; please adjust the commands if needed.
:::

## Familiarize yourself with the TSV file {#familiarize-yourself-with-the-tsv-file}

Before starting to work with the ClickHouse database, familiarize yourself with the data.

### Look at the fields in the source TSV file {#look-at-the-fields-in-the-source-tsv-file}

This is an example of a command to query a TSV file, but don't run it yet.

```sh
clickhouse-local --query \
"describe file('${HOME}/NYPD_Complaint_Data_Current__Year_To_Date_.tsv', 'TSVWithNames')"
```

Sample response

```response
CMPLNT_NUM    Nullable(Float64)
ADDR_PCT_CD   Nullable(Float64)
BORO_NM       Nullable(String)
CMPLNT_FR_DT  Nullable(String)
CMPLNT_FR_TM  Nullable(String)
```
:::tip Most of the time the above command will let you know which fields in the input data are numeric, and which are strings, and which are tuples. This is not always the case. Because ClickHouse is routineley used with datasets containing billions of records there is a default number (100) of rows examined to infer the schema in order to avoid parsing billions of rows to infer the schema. The response below may not match what you see, as the dataset is updated several times each year. Looking at the Data Dictionary you can see that CMPLNT_NUM is specified as text, and not numeric. By overriding the default of 100 rows for inference with the setting SETTINGS input_format_max_rows_to_read_for_schema_inference=2000 you can get a better idea of the content. Note: as of version 22.5 the default is now 25,000 rows for inferring the schema, so only change the setting if you are on an older version or if you need more than 25,000 rows to be sampled. ::: Run this command at your command prompt. You will be using clickhouse-local to query the data in the TSV file you downloaded. 
sh clickhouse-local --input_format_max_rows_to_read_for_schema_inference=2000 \ --query \ "describe file('${HOME}/NYPD_Complaint_Data_Current__Year_To_Date_.tsv', 'TSVWithNames')" Result: response CMPLNT_NUM Nullable(String) ADDR_PCT_CD Nullable(Float64) BORO_NM Nullable(String) CMPLNT_FR_DT Nullable(String) CMPLNT_FR_TM Nullable(String) CMPLNT_TO_DT Nullable(String) CMPLNT_TO_TM Nullable(String) CRM_ATPT_CPTD_CD Nullable(String) HADEVELOPT Nullable(String) HOUSING_PSA Nullable(Float64) JURISDICTION_CODE Nullable(Float64) JURIS_DESC Nullable(String) KY_CD Nullable(Float64) LAW_CAT_CD Nullable(String) LOC_OF_OCCUR_DESC Nullable(String) OFNS_DESC Nullable(String) PARKS_NM Nullable(String) PATROL_BORO Nullable(String) PD_CD Nullable(Float64) PD_DESC Nullable(String) PREM_TYP_DESC Nullable(String) RPT_DT Nullable(String) STATION_NAME Nullable(String) SUSP_AGE_GROUP Nullable(String) SUSP_RACE Nullable(String) SUSP_SEX Nullable(String) TRANSIT_DISTRICT Nullable(Float64) VIC_AGE_GROUP Nullable(String) VIC_RACE Nullable(String) VIC_SEX Nullable(String) X_COORD_CD Nullable(Float64) Y_COORD_CD Nullable(Float64) Latitude Nullable(Float64) Longitude Nullable(Float64) Lat_Lon Tuple(Nullable(Float64), Nullable(Float64)) New Georeferenced Column Nullable(String)
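The effect of the row limit on inference can be sketched in plain Python. This is an illustration of the idea only, not ClickHouse's actual inference algorithm, and the sample column below is hypothetical:

```python
# Illustrative sketch of row-limited schema inference: a column is typed by
# examining only the first `max_rows` values, so a late non-numeric value
# can be missed when the sample is too small.
def is_float(s):
    try:
        float(s)
        return True
    except ValueError:
        return False

def infer_type(values, max_rows=100):
    sample = [v for v in values[:max_rows] if v != ""]
    if sample and all(is_float(v) for v in sample):
        return "Nullable(Float64)"
    return "Nullable(String)"

# Hypothetical column: the first values look numeric, but a later
# complaint number contains letters (as CMPLNT_NUM does in this dataset).
cmplnt_num = [str(240000000 + i) for i in range(150)] + ["243-H4567"]

print(infer_type(cmplnt_num, max_rows=100))   # Nullable(Float64)
print(infer_type(cmplnt_num, max_rows=2000))  # Nullable(String)
```

Raising the sample size lets the inference see the non-numeric values, which is exactly why the larger setting flips CMPLNT_NUM from Nullable(Float64) to Nullable(String).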
At this point you should check that the columns in the TSV file match the names and types specified in the Columns in this Dataset section of the dataset web page . The data types are not very specific: all numeric fields are set to Nullable(Float64) , and all other fields are Nullable(String) . When you create a ClickHouse table to store the data you can specify more appropriate and performant types. Determine the proper schema {#determine-the-proper-schema} In order to figure out what types should be used for the fields it is necessary to know what the data looks like. For example, the field JURISDICTION_CODE is numeric: should it be a UInt8 , or an Enum , or is Float64 appropriate? sh clickhouse-local --input_format_max_rows_to_read_for_schema_inference=2000 \ --query \ "select JURISDICTION_CODE, count() FROM file('${HOME}/NYPD_Complaint_Data_Current__Year_To_Date_.tsv', 'TSVWithNames') GROUP BY JURISDICTION_CODE ORDER BY JURISDICTION_CODE FORMAT PrettyCompact" Result: response ┌─JURISDICTION_CODE─┬─count()─┐ │ 0 │ 188875 │ │ 1 │ 4799 │ │ 2 │ 13833 │ │ 3 │ 656 │ │ 4 │ 51 │ │ 6 │ 5 │ │ 7 │ 2 │ │ 9 │ 13 │ │ 11 │ 14 │ │ 12 │ 5 │ │ 13 │ 2 │ │ 14 │ 70 │ │ 15 │ 20 │ │ 72 │ 159 │ │ 87 │ 9 │ │ 88 │ 75 │ │ 97 │ 405 │ └───────────────────┴─────────┘ The query response shows that JURISDICTION_CODE fits well in a UInt8 . Similarly, look at some of the String fields and see if they are well suited to being DateTime or LowCardinality(String) fields. For example, the field PARKS_NM is described as "Name of NYC park, playground or greenspace of occurrence, if applicable (state parks are not included)". 
The names of parks in New York City may be a good candidate for a LowCardinality(String) : sh clickhouse-local --input_format_max_rows_to_read_for_schema_inference=2000 \ --query \ "select count(distinct PARKS_NM) FROM file('${HOME}/NYPD_Complaint_Data_Current__Year_To_Date_.tsv', 'TSVWithNames') FORMAT PrettyCompact" Result: response ┌─uniqExact(PARKS_NM)─┐ │ 319 │ └─────────────────────┘ Have a look at some of the park names: sql clickhouse-local --input_format_max_rows_to_read_for_schema_inference=2000 \ --query \ "select distinct PARKS_NM FROM file('${HOME}/NYPD_Complaint_Data_Current__Year_To_Date_.tsv', 'TSVWithNames') LIMIT 10 FORMAT PrettyCompact" Result:
response ┌─PARKS_NM───────────────────┐ │ (null) │ │ ASSER LEVY PARK │ │ JAMES J WALKER PARK │ │ BELT PARKWAY/SHORE PARKWAY │ │ PROSPECT PARK │ │ MONTEFIORE SQUARE │ │ SUTTON PLACE PARK │ │ JOYCE KILMER PARK │ │ ALLEY ATHLETIC PLAYGROUND │ │ ASTORIA PARK │ └────────────────────────────┘ The dataset in use at the time of writing has only a few hundred distinct parks and playgrounds in the PARKS_NM column. This is a small number, well within the LowCardinality recommendation to stay below 10,000 distinct strings in a LowCardinality(String) field. DateTime fields {#datetime-fields} Based on the Columns in this Dataset section of the dataset web page there are date and time fields for the start and end of the reported event. Looking at the min and max of the CMPLNT_FR_DT and CMPLNT_TO_DT gives an idea of whether or not the fields are always populated: sh title="CMPLNT_FR_DT" clickhouse-local --input_format_max_rows_to_read_for_schema_inference=2000 \ --query \ "select min(CMPLNT_FR_DT), max(CMPLNT_FR_DT) FROM file('${HOME}/NYPD_Complaint_Data_Current__Year_To_Date_.tsv', 'TSVWithNames') FORMAT PrettyCompact" Result: response ┌─min(CMPLNT_FR_DT)─┬─max(CMPLNT_FR_DT)─┐ │ 01/01/1973 │ 12/31/2021 │ └───────────────────┴───────────────────┘ sh title="CMPLNT_TO_DT" clickhouse-local --input_format_max_rows_to_read_for_schema_inference=2000 \ --query \ "select min(CMPLNT_TO_DT), max(CMPLNT_TO_DT) FROM file('${HOME}/NYPD_Complaint_Data_Current__Year_To_Date_.tsv', 'TSVWithNames') FORMAT PrettyCompact" Result: response ┌─min(CMPLNT_TO_DT)─┬─max(CMPLNT_TO_DT)─┐ │ │ 12/31/2021 │ └───────────────────┴───────────────────┘ sh title="CMPLNT_FR_TM" clickhouse-local --input_format_max_rows_to_read_for_schema_inference=2000 \ --query \ "select min(CMPLNT_FR_TM), max(CMPLNT_FR_TM) FROM file('${HOME}/NYPD_Complaint_Data_Current__Year_To_Date_.tsv', 'TSVWithNames') FORMAT PrettyCompact" Result: response ┌─min(CMPLNT_FR_TM)─┬─max(CMPLNT_FR_TM)─┐ │ 00:00:00 │ 23:59:00 │ 
└───────────────────┴───────────────────┘ sh title="CMPLNT_TO_TM" clickhouse-local --input_format_max_rows_to_read_for_schema_inference=2000 \ --query \ "select min(CMPLNT_TO_TM), max(CMPLNT_TO_TM) FROM file('${HOME}/NYPD_Complaint_Data_Current__Year_To_Date_.tsv', 'TSVWithNames') FORMAT PrettyCompact" Result: response ┌─min(CMPLNT_TO_TM)─┬─max(CMPLNT_TO_TM)─┐ │ (null) │ 23:59:00 │ └───────────────────┴───────────────────┘
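These checks amount to simple heuristics, which can be sketched in Python. The sketch below is an illustration under assumed thresholds: the 0–255 range is the UInt8 range, and 10,000 is the LowCardinality guideline quoted above.

```python
# Rough sketch of the type-selection heuristics used in this section:
# small non-negative integers fit UInt8; string columns with few distinct
# values are candidates for LowCardinality(String).
def suggest_type(values):
    non_null = [v for v in values if v is not None]
    if non_null and all(isinstance(v, int) for v in non_null):
        if 0 <= min(non_null) and max(non_null) <= 255:
            return "UInt8"
        return "Int64"  # fall back to a wide integer type
    if len(set(non_null)) < 10_000:  # LowCardinality guideline
        return "LowCardinality(String)"
    return "String"

print(suggest_type([0, 1, 2, 14, 72, 97]))                    # like JURISDICTION_CODE
print(suggest_type(["PROSPECT PARK", "ASTORIA PARK", None]))  # like PARKS_NM
```

The first call returns UInt8 and the second LowCardinality(String), mirroring the decisions reached from the queries above.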
Make a plan {#make-a-plan} Based on the above investigation: - JURISDICTION_CODE should be cast as UInt8 . - PARKS_NM should be cast to LowCardinality(String) - CMPLNT_FR_DT and CMPLNT_FR_TM are always populated (possibly with a default time of 00:00:00 ) - CMPLNT_TO_DT and CMPLNT_TO_TM may be empty - Dates and times are stored in separate fields in the source - Dates are mm/dd/yyyy format - Times are hh:mm:ss format - Dates and times can be concatenated into DateTime types - There are some dates before January 1st 1970, which means we need a 64 bit DateTime :::note There are many more changes to be made to the types, they all can be determined by following the same investigation steps. Look at the number of distinct strings in a field, the min and max of the numerics, and make your decisions. The table schema that is given later in the guide has many low cardinality strings and unsigned integer fields and very few floating point numerics. ::: Concatenate the date and time fields {#concatenate-the-date-and-time-fields} To concatenate the date and time fields CMPLNT_FR_DT and CMPLNT_FR_TM into a single String that can be cast to a DateTime , select the two fields joined by the concatenation operator: CMPLNT_FR_DT || ' ' || CMPLNT_FR_TM . The CMPLNT_TO_DT and CMPLNT_TO_TM fields are handled similarly. 
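As a quick sanity check outside ClickHouse, the same concatenate-then-parse step can be mimicked in Python. This is an analogy only; parseDateTime64BestEffort accepts many more input formats than the single pattern assumed here.

```python
from datetime import datetime

# Python analogue of CMPLNT_FR_DT || ' ' || CMPLNT_FR_TM followed by
# parseDateTime64BestEffortOrNull: parse MM/DD/YYYY plus HH:MM:SS,
# returning None when the fields are empty or malformed.
def parse_or_null(date_str, time_str):
    try:
        return datetime.strptime(f"{date_str} {time_str}", "%m/%d/%Y %H:%M:%S")
    except ValueError:
        return None

print(parse_or_null("07/29/2010", "00:01:00"))  # 2010-07-29 00:01:00
print(parse_or_null("", ""))                    # None
```

Returning None for unparseable input mirrors why the ...OrNull variant is used for the complaint end time, which is not guaranteed to be populated.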
sh clickhouse-local --input_format_max_rows_to_read_for_schema_inference=2000 \ --query \ "select CMPLNT_FR_DT || ' ' || CMPLNT_FR_TM AS complaint_begin FROM file('${HOME}/NYPD_Complaint_Data_Current__Year_To_Date_.tsv', 'TSVWithNames') LIMIT 10 FORMAT PrettyCompact" Result: response ┌─complaint_begin─────┐ │ 07/29/2010 00:01:00 │ │ 12/01/2011 12:00:00 │ │ 04/01/2017 15:00:00 │ │ 03/26/2018 17:20:00 │ │ 01/01/2019 00:00:00 │ │ 06/14/2019 00:00:00 │ │ 11/29/2021 20:00:00 │ │ 12/04/2021 00:35:00 │ │ 12/05/2021 12:50:00 │ │ 12/07/2021 20:30:00 │ └─────────────────────┘ Convert the date and time String to a DateTime64 type {#convert-the-date-and-time-string-to-a-datetime64-type} Earlier in the guide we discovered that there are dates in the TSV file before January 1st 1970, which means that we need a 64 bit DateTime type for the dates. The dates also need to be converted from MM/DD/YYYY to YYYY/MM/DD format. Both of these can be done with parseDateTime64BestEffort() . sh clickhouse-local --input_format_max_rows_to_read_for_schema_inference=2000 \ --query \ "WITH (CMPLNT_FR_DT || ' ' || CMPLNT_FR_TM) AS CMPLNT_START, (CMPLNT_TO_DT || ' ' || CMPLNT_TO_TM) AS CMPLNT_END select parseDateTime64BestEffort(CMPLNT_START) AS complaint_begin, parseDateTime64BestEffortOrNull(CMPLNT_END) AS complaint_end FROM file('${HOME}/NYPD_Complaint_Data_Current__Year_To_Date_.tsv', 'TSVWithNames') ORDER BY complaint_begin ASC LIMIT 25 FORMAT PrettyCompact"
Lines 2 and 3 above contain the concatenation from the previous step, and lines 4 and 5 above parse the strings into DateTime64 . As the complaint end time is not guaranteed to exist parseDateTime64BestEffortOrNull is used. Result: response ┌─────────complaint_begin─┬───────────complaint_end─┐ │ 1925-01-01 10:00:00.000 │ 2021-02-12 09:30:00.000 │ │ 1925-01-01 11:37:00.000 │ 2022-01-16 11:49:00.000 │ │ 1925-01-01 15:00:00.000 │ 2021-12-31 00:00:00.000 │ │ 1925-01-01 15:00:00.000 │ 2022-02-02 22:00:00.000 │ │ 1925-01-01 19:00:00.000 │ 2022-04-14 05:00:00.000 │ │ 1955-09-01 19:55:00.000 │ 2022-08-01 00:45:00.000 │ │ 1972-03-17 11:40:00.000 │ 2022-03-17 11:43:00.000 │ │ 1972-05-23 22:00:00.000 │ 2022-05-24 09:00:00.000 │ │ 1972-05-30 23:37:00.000 │ 2022-05-30 23:50:00.000 │ │ 1972-07-04 02:17:00.000 │ ᴺᵁᴸᴸ │ │ 1973-01-01 00:00:00.000 │ ᴺᵁᴸᴸ │ │ 1975-01-01 00:00:00.000 │ ᴺᵁᴸᴸ │ │ 1976-11-05 00:01:00.000 │ 1988-10-05 23:59:00.000 │ │ 1977-01-01 00:00:00.000 │ 1977-01-01 23:59:00.000 │ │ 1977-12-20 00:01:00.000 │ ᴺᵁᴸᴸ │ │ 1981-01-01 00:01:00.000 │ ᴺᵁᴸᴸ │ │ 1981-08-14 00:00:00.000 │ 1987-08-13 23:59:00.000 │ │ 1983-01-07 00:00:00.000 │ 1990-01-06 00:00:00.000 │ │ 1984-01-01 00:01:00.000 │ 1984-12-31 23:59:00.000 │ │ 1985-01-01 12:00:00.000 │ 1987-12-31 15:00:00.000 │ │ 1985-01-11 09:00:00.000 │ 1985-12-31 12:00:00.000 │ │ 1986-03-16 00:05:00.000 │ 2022-03-16 00:45:00.000 │ │ 1987-01-07 00:00:00.000 │ 1987-01-09 00:00:00.000 │ │ 1988-04-03 18:30:00.000 │ 2022-08-03 09:45:00.000 │ │ 1988-07-29 12:00:00.000 │ 1990-07-27 22:00:00.000 │ └─────────────────────────┴─────────────────────────┘ :::note The dates shown as 1925 above are from errors in the data. There are several records in the original data with dates in the years 1019 - 1022 that should be 2019 - 2022 . They are being stored as Jan 1st 1925 as that is the earliest date with a 64 bit DateTime. 
::: Create a table {#create-a-table} The decisions made above on the data types used for the columns are reflected in the table schema below. We also need to decide on the ORDER BY and PRIMARY KEY used for the table. At least one of ORDER BY or PRIMARY KEY must be specified. Here are some guidelines on deciding on the columns to include in ORDER BY , and more information is in the Next Steps section at the end of this document. ORDER BY and PRIMARY KEY clauses {#order-by-and-primary-key-clauses} The ORDER BY tuple should include fields that are used in query filters To maximize compression on disk the ORDER BY tuple should be ordered by ascending cardinality If it exists, the PRIMARY KEY tuple must be a subset of the ORDER BY tuple If only ORDER BY is specified, then the same tuple will be used as PRIMARY KEY The primary key index is created using the PRIMARY KEY tuple if specified, otherwise the ORDER BY tuple
The PRIMARY KEY index is kept in main memory Looking at the dataset and the questions it might answer, we might decide to look at the types of crimes reported over time in the five boroughs of New York City. These fields might then be included in the ORDER BY : | Column | Description (from the data dictionary) | | ----------- | --------------------------------------------------- | | OFNS_DESC | Description of offense corresponding with key code | | RPT_DT | Date event was reported to police | | BORO_NM | The name of the borough in which the incident occurred | Querying the TSV file for the cardinality of the three candidate columns: bash clickhouse-local --input_format_max_rows_to_read_for_schema_inference=2000 \ --query \ "select formatReadableQuantity(uniq(OFNS_DESC)) as cardinality_OFNS_DESC, formatReadableQuantity(uniq(RPT_DT)) as cardinality_RPT_DT, formatReadableQuantity(uniq(BORO_NM)) as cardinality_BORO_NM FROM file('${HOME}/NYPD_Complaint_Data_Current__Year_To_Date_.tsv', 'TSVWithNames') FORMAT PrettyCompact" Result: response ┌─cardinality_OFNS_DESC─┬─cardinality_RPT_DT─┬─cardinality_BORO_NM─┐ │ 60.00 │ 306.00 │ 6.00 │ └───────────────────────┴────────────────────┴─────────────────────┘ Ordering by ascending cardinality, the ORDER BY becomes: sql ORDER BY ( BORO_NM, OFNS_DESC, RPT_DT ) :::note The table below uses more easily read column names; the above names will be mapped to sql ORDER BY ( borough, offense_description, date_reported ) ::: Putting together the changes to data types and the ORDER BY tuple gives this table structure:
sql CREATE TABLE NYPD_Complaint ( complaint_number String, precinct UInt8, borough LowCardinality(String), complaint_begin DateTime64(0,'America/New_York'), complaint_end DateTime64(0,'America/New_York'), was_crime_completed String, housing_authority String, housing_level_code UInt32, jurisdiction_code UInt8, jurisdiction LowCardinality(String), offense_code UInt8, offense_level LowCardinality(String), location_descriptor LowCardinality(String), offense_description LowCardinality(String), park_name LowCardinality(String), patrol_borough LowCardinality(String), PD_CD UInt16, PD_DESC String, location_type LowCardinality(String), date_reported Date, transit_station LowCardinality(String), suspect_age_group LowCardinality(String), suspect_race LowCardinality(String), suspect_sex LowCardinality(String), transit_district UInt8, victim_age_group LowCardinality(String), victim_race LowCardinality(String), victim_sex LowCardinality(String), NY_x_coordinate UInt32, NY_y_coordinate UInt32, Latitude Float64, Longitude Float64 ) ENGINE = MergeTree ORDER BY ( borough, offense_description, date_reported ) Finding the primary key of a table {#finding-the-primary-key-of-a-table} The ClickHouse system database, specifically the system.tables table, has all of the information about the table you just created. This query shows the ORDER BY (sorting key) and the PRIMARY KEY : sql SELECT partition_key, sorting_key, primary_key, table FROM system.tables WHERE table = 'NYPD_Complaint' FORMAT Vertical Response ```response Query id: 6a5b10bf-9333-4090-b36e-c7f08b1d9e01 Row 1: ────── partition_key: sorting_key: borough, offense_description, date_reported primary_key: borough, offense_description, date_reported table: NYPD_Complaint 1 row in set. Elapsed: 0.001 sec. 
``` Preprocess and import data {#preprocess-import-data}
We will use the clickhouse-local tool for data preprocessing and clickhouse-client to upload it. clickhouse-local arguments used {#clickhouse-local-arguments-used} :::tip table='input' appears in the arguments to clickhouse-local below. clickhouse-local takes the provided input ( cat ${HOME}/NYPD_Complaint_Data_Current__Year_To_Date_.tsv ) and inserts the input into a table. By default the table is named table . In this guide the name of the table is set to input to make the data flow clearer. The final argument to clickhouse-local is a query that selects from the table ( FROM input ), which is then piped to clickhouse-client to populate the table NYPD_Complaint . :::
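Conceptually the pipeline is extract, rename/transform, load. A hypothetical Python analogue of the renaming step (only a subset of columns is shown, using the guide's aliases):

```python
# Illustration only: rows stream in from the raw TSV, a SELECT-like step
# renames source columns to the target schema's names, and the results
# stream out to be loaded into the destination table.
COLUMN_MAP = {  # raw TSV name -> destination column (subset)
    "CMPLNT_NUM": "complaint_number",
    "BORO_NM": "borough",
    "OFNS_DESC": "offense_description",
}

def select_from_input(rows):
    for row in rows:
        yield {new: row[old] for old, new in COLUMN_MAP.items()}

raw_rows = [{"CMPLNT_NUM": "100", "BORO_NM": "QUEENS", "OFNS_DESC": "ROBBERY"}]
for out in select_from_input(raw_rows):
    print(out)  # one dict per row, keyed by the new column names
```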
sh cat ${HOME}/NYPD_Complaint_Data_Current__Year_To_Date_.tsv \ | clickhouse-local --table='input' --input-format='TSVWithNames' \ --input_format_max_rows_to_read_for_schema_inference=2000 \ --query " WITH (CMPLNT_FR_DT || ' ' || CMPLNT_FR_TM) AS CMPLNT_START, (CMPLNT_TO_DT || ' ' || CMPLNT_TO_TM) AS CMPLNT_END SELECT CMPLNT_NUM AS complaint_number, ADDR_PCT_CD AS precinct, BORO_NM AS borough, parseDateTime64BestEffort(CMPLNT_START) AS complaint_begin, parseDateTime64BestEffortOrNull(CMPLNT_END) AS complaint_end, CRM_ATPT_CPTD_CD AS was_crime_completed, HADEVELOPT AS housing_authority_development, HOUSING_PSA AS housing_level_code, JURISDICTION_CODE AS jurisdiction_code, JURIS_DESC AS jurisdiction, KY_CD AS offense_code, LAW_CAT_CD AS offense_level, LOC_OF_OCCUR_DESC AS location_descriptor, OFNS_DESC AS offense_description, PARKS_NM AS park_name, PATROL_BORO AS patrol_borough, PD_CD, PD_DESC, PREM_TYP_DESC AS location_type, toDate(parseDateTimeBestEffort(RPT_DT)) AS date_reported, STATION_NAME AS transit_station, SUSP_AGE_GROUP AS suspect_age_group, SUSP_RACE AS suspect_race, SUSP_SEX AS suspect_sex, TRANSIT_DISTRICT AS transit_district, VIC_AGE_GROUP AS victim_age_group, VIC_RACE AS victim_race, VIC_SEX AS victim_sex, X_COORD_CD AS NY_x_coordinate, Y_COORD_CD AS NY_y_coordinate, Latitude, Longitude FROM input" \ | clickhouse-client --query='INSERT INTO NYPD_Complaint FORMAT TSV' Validate the data {#validate-data} :::note The dataset changes once or more per year, so your counts may not match what is in this document. ::: Query: sql SELECT count() FROM NYPD_Complaint Result: ```text ┌─count()─┐ │ 208993 │ └─────────┘ 1 row in set. Elapsed: 0.001 sec. ``` The size of the dataset in ClickHouse is just 12% of the original TSV file. Compare the size of the original TSV file with the size of the table:
Query: sql SELECT formatReadableSize(total_bytes) FROM system.tables WHERE name = 'NYPD_Complaint' Result: text ┌─formatReadableSize(total_bytes)─┐ │ 8.63 MiB │ └─────────────────────────────────┘ Run some queries {#run-queries} Query 1. Compare the number of complaints by month {#query-1-compare-the-number-of-complaints-by-month} Query: sql SELECT dateName('month', date_reported) AS month, count() AS complaints, bar(complaints, 0, 50000, 80) FROM NYPD_Complaint GROUP BY month ORDER BY complaints DESC Result: ```response Query id: 7fbd4244-b32a-4acf-b1f3-c3aa198e74d9 ┌─month─────┬─complaints─┬─bar(count(), 0, 50000, 80)───────────────────────────────┐ │ March │ 34536 │ ███████████████████████████████████████████████████████▎ │ │ May │ 34250 │ ██████████████████████████████████████████████████████▋ │ │ April │ 32541 │ ████████████████████████████████████████████████████ │ │ January │ 30806 │ █████████████████████████████████████████████████▎ │ │ February │ 28118 │ ████████████████████████████████████████████▊ │ │ November │ 7474 │ ███████████▊ │ │ December │ 7223 │ ███████████▌ │ │ October │ 7070 │ ███████████▎ │ │ September │ 6910 │ ███████████ │ │ August │ 6801 │ ██████████▊ │ │ June │ 6779 │ ██████████▋ │ │ July │ 6485 │ ██████████▍ │ └───────────┴────────────┴──────────────────────────────────────────────────────────┘ 12 rows in set. Elapsed: 0.006 sec. Processed 208.99 thousand rows, 417.99 KB (37.48 million rows/s., 74.96 MB/s.) ``` Query 2. 
Compare total number of complaints by borough {#query-2-compare-total-number-of-complaints-by-borough} Query: sql SELECT borough, count() AS complaints, bar(complaints, 0, 125000, 60) FROM NYPD_Complaint GROUP BY borough ORDER BY complaints DESC Result: ```response Query id: 8cdcdfd4-908f-4be0-99e3-265722a2ab8d ┌─borough───────┬─complaints─┬─bar(count(), 0, 125000, 60)──┐ │ BROOKLYN │ 57947 │ ███████████████████████████▋ │ │ MANHATTAN │ 53025 │ █████████████████████████▍ │ │ QUEENS │ 44875 │ █████████████████████▌ │ │ BRONX │ 44260 │ █████████████████████▏ │ │ STATEN ISLAND │ 8503 │ ████ │ │ (null) │ 383 │ ▏ │ └───────────────┴────────────┴──────────────────────────────┘ 6 rows in set. Elapsed: 0.008 sec. Processed 208.99 thousand rows, 209.43 KB (27.14 million rows/s., 27.20 MB/s.) ``` Next steps {#next-steps}
A Practical Introduction to Sparse Primary Indexes in ClickHouse discusses the differences in ClickHouse indexing compared to traditional relational databases, how ClickHouse builds and uses a sparse primary index, and indexing best practices.
description: 'The TPC-H benchmark data set and queries.' sidebar_label: 'TPC-H' slug: /getting-started/example-datasets/tpch title: 'TPC-H (1999)' doc_type: 'guide' keywords: ['example dataset', 'tpch', 'benchmark', 'sample data', 'performance testing'] A popular benchmark which models the internal data warehouse of a wholesale supplier. The data is stored in a 3rd normal form representation, requiring lots of joins at query runtime. Despite its age and its unrealistic assumption that the data is uniformly and independently distributed, TPC-H remains the most popular OLAP benchmark to date. References TPC-H New TPC Benchmarks for Decision Support and Web Commerce (Poess et al., 2000) TPC-H Analyzed: Hidden Messages and Lessons Learned from an Influential Benchmark (Boncz et al., 2013) Quantifying TPC-H Choke Points and Their Optimizations (Dreseler et al., 2020) Data Generation and Import {#data-generation-and-import} First, check out the TPC-H repository and compile the data generator: bash git clone https://github.com/gregrahn/tpch-kit.git cd tpch-kit/dbgen make Then, generate the data. The parameter -s specifies the scale factor. For example, with -s 100 , 600 million rows are generated for the table 'lineitem'. bash ./dbgen -s 100 Detailed table sizes with scale factor 100: | Table | size (in rows) | size (compressed in ClickHouse) | |----------|----------------|---------------------------------| | nation | 25 | 2 kB | | region | 5 | 1 kB | | part | 20.000.000 | 895 MB | | supplier | 1.000.000 | 75 MB | | partsupp | 80.000.000 | 4.37 GB | | customer | 15.000.000 | 1.19 GB | | orders | 150.000.000 | 6.15 GB | | lineitem | 600.000.000 | 26.69 GB |
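Most table cardinalities scale linearly with the scale factor, while nation and region are fixed in size. A quick sketch using the base row counts from the TPC-H specification (lineitem's count is only approximate in practice, since the number of lines per order varies):

```python
# Approximate TPC-H row counts per table as a function of the scale factor.
# Base cardinalities are the scale-factor-1 sizes from the TPC-H spec;
# nation and region do not scale.
BASE_ROWS = {
    "nation": 25, "region": 5,            # fixed size
    "part": 200_000, "supplier": 10_000,
    "partsupp": 800_000, "customer": 150_000,
    "orders": 1_500_000, "lineitem": 6_000_000,
}

def expected_rows(table, scale_factor):
    if table in ("nation", "region"):
        return BASE_ROWS[table]
    return BASE_ROWS[table] * scale_factor

print(expected_rows("lineitem", 100))  # 600000000, matching the table above
```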
(Compressed sizes in ClickHouse are taken from system.tables.total_bytes and based on below table definitions.) Now create tables in ClickHouse. We stick as closely as possible to the rules of the TPC-H specification: - Primary keys are created only for the columns mentioned in section 1.4.2.2 of the specification. - Substitution parameters were replaced by the values for query validation in sections 2.1.x.4 of the specification. - As per section 1.4.2.1, the table definitions do not use the optional NOT NULL constraints, even if dbgen generates them by default. The performance of SELECT queries in ClickHouse is not affected by the presence or absence of NOT NULL constraints. - As per section 1.3.1, we use ClickHouse's native datatypes (e.g. Int32 , String ) to implement the abstract datatypes mentioned in the specification (e.g. Identifier , Variable text, size N ). The only effect of this is better readability, the SQL-92 datatypes generated by dbgen (e.g. INTEGER , VARCHAR(40) ) would also work in ClickHouse. 
```sql CREATE TABLE nation ( n_nationkey Int32, n_name String, n_regionkey Int32, n_comment String) ORDER BY (n_nationkey); CREATE TABLE region ( r_regionkey Int32, r_name String, r_comment String) ORDER BY (r_regionkey); CREATE TABLE part ( p_partkey Int32, p_name String, p_mfgr String, p_brand String, p_type String, p_size Int32, p_container String, p_retailprice Decimal(15,2), p_comment String) ORDER BY (p_partkey); CREATE TABLE supplier ( s_suppkey Int32, s_name String, s_address String, s_nationkey Int32, s_phone String, s_acctbal Decimal(15,2), s_comment String) ORDER BY (s_suppkey); CREATE TABLE partsupp ( ps_partkey Int32, ps_suppkey Int32, ps_availqty Int32, ps_supplycost Decimal(15,2), ps_comment String) ORDER BY (ps_partkey, ps_suppkey); CREATE TABLE customer ( c_custkey Int32, c_name String, c_address String, c_nationkey Int32, c_phone String, c_acctbal Decimal(15,2), c_mktsegment String, c_comment String) ORDER BY (c_custkey); CREATE TABLE orders ( o_orderkey Int32, o_custkey Int32, o_orderstatus String, o_totalprice Decimal(15,2), o_orderdate Date, o_orderpriority String, o_clerk String, o_shippriority Int32, o_comment String) ORDER BY (o_orderkey); -- The following is an alternative order key which is not compliant with the official TPC-H rules but recommended by sec. 4.5 in -- "Quantifying TPC-H Choke Points and Their Optimizations": -- ORDER BY (o_orderdate, o_orderkey);
CREATE TABLE lineitem ( l_orderkey Int32, l_partkey Int32, l_suppkey Int32, l_linenumber Int32, l_quantity Decimal(15,2), l_extendedprice Decimal(15,2), l_discount Decimal(15,2), l_tax Decimal(15,2), l_returnflag String, l_linestatus String, l_shipdate Date, l_commitdate Date, l_receiptdate Date, l_shipinstruct String, l_shipmode String, l_comment String) ORDER BY (l_orderkey, l_linenumber); -- The following is an alternative order key which is not compliant with the official TPC-H rules but recommended by sec. 4.5 in -- "Quantifying TPC-H Choke Points and Their Optimizations": -- ORDER BY (l_shipdate, l_orderkey, l_linenumber); ``` The data can be imported as follows: bash clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO nation FORMAT CSV" < nation.tbl clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO region FORMAT CSV" < region.tbl clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO part FORMAT CSV" < part.tbl clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO supplier FORMAT CSV" < supplier.tbl clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO partsupp FORMAT CSV" < partsupp.tbl clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO customer FORMAT CSV" < customer.tbl clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO orders FORMAT CSV" < orders.tbl clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO lineitem FORMAT CSV" < lineitem.tbl :::note Instead of using tpch-kit and generating the tables by yourself, you can alternatively import the data from a public S3 bucket. Make sure to create empty tables first using above CREATE statements.
```sql
-- Scaling factor 1
INSERT INTO nation SELECT * FROM s3('https://clickhouse-datasets.s3.amazonaws.com/h/1/nation.tbl', NOSIGN, CSV) SETTINGS format_csv_delimiter = '|', input_format_defaults_for_omitted_fields = 1, input_format_csv_empty_as_default = 1;
INSERT INTO region SELECT * FROM s3('https://clickhouse-datasets.s3.amazonaws.com/h/1/region.tbl', NOSIGN, CSV) SETTINGS format_csv_delimiter = '|', input_format_defaults_for_omitted_fields = 1, input_format_csv_empty_as_default = 1;
INSERT INTO part SELECT * FROM s3('https://clickhouse-datasets.s3.amazonaws.com/h/1/part.tbl', NOSIGN, CSV) SETTINGS format_csv_delimiter = '|', input_format_defaults_for_omitted_fields = 1, input_format_csv_empty_as_default = 1;
INSERT INTO supplier SELECT * FROM s3('https://clickhouse-datasets.s3.amazonaws.com/h/1/supplier.tbl', NOSIGN, CSV) SETTINGS format_csv_delimiter = '|', input_format_defaults_for_omitted_fields = 1, input_format_csv_empty_as_default = 1;
INSERT INTO partsupp SELECT * FROM s3('https://clickhouse-datasets.s3.amazonaws.com/h/1/partsupp.tbl', NOSIGN, CSV) SETTINGS format_csv_delimiter = '|', input_format_defaults_for_omitted_fields = 1, input_format_csv_empty_as_default = 1;
INSERT INTO customer SELECT * FROM s3('https://clickhouse-datasets.s3.amazonaws.com/h/1/customer.tbl', NOSIGN, CSV) SETTINGS format_csv_delimiter = '|', input_format_defaults_for_omitted_fields = 1, input_format_csv_empty_as_default = 1;
INSERT INTO orders SELECT * FROM s3('https://clickhouse-datasets.s3.amazonaws.com/h/1/orders.tbl', NOSIGN, CSV) SETTINGS format_csv_delimiter = '|', input_format_defaults_for_omitted_fields = 1, input_format_csv_empty_as_default = 1;
INSERT INTO lineitem SELECT * FROM s3('https://clickhouse-datasets.s3.amazonaws.com/h/1/lineitem.tbl', NOSIGN, CSV) SETTINGS format_csv_delimiter = '|', input_format_defaults_for_omitted_fields = 1, input_format_csv_empty_as_default = 1;
-- Scaling factor 100
INSERT INTO nation SELECT * FROM s3('https://clickhouse-datasets.s3.amazonaws.com/h/100/nation.tbl.gz', NOSIGN, CSV) SETTINGS format_csv_delimiter = '|', input_format_defaults_for_omitted_fields = 1, input_format_csv_empty_as_default = 1;
INSERT INTO region SELECT * FROM s3('https://clickhouse-datasets.s3.amazonaws.com/h/100/region.tbl.gz', NOSIGN, CSV) SETTINGS format_csv_delimiter = '|', input_format_defaults_for_omitted_fields = 1, input_format_csv_empty_as_default = 1;
INSERT INTO part SELECT * FROM s3('https://clickhouse-datasets.s3.amazonaws.com/h/100/part.tbl.gz', NOSIGN, CSV) SETTINGS format_csv_delimiter = '|', input_format_defaults_for_omitted_fields = 1, input_format_csv_empty_as_default = 1;
INSERT INTO supplier SELECT * FROM s3('https://clickhouse-datasets.s3.amazonaws.com/h/100/supplier.tbl.gz', NOSIGN, CSV) SETTINGS format_csv_delimiter = '|', input_format_defaults_for_omitted_fields = 1, input_format_csv_empty_as_default = 1;
INSERT INTO partsupp SELECT * FROM s3('https://clickhouse-datasets.s3.amazonaws.com/h/100/partsupp.tbl.gz', NOSIGN, CSV) SETTINGS format_csv_delimiter = '|', input_format_defaults_for_omitted_fields = 1, input_format_csv_empty_as_default = 1;
INSERT INTO customer SELECT * FROM s3('https://clickhouse-datasets.s3.amazonaws.com/h/100/customer.tbl.gz', NOSIGN, CSV) SETTINGS format_csv_delimiter = '|', input_format_defaults_for_omitted_fields = 1, input_format_csv_empty_as_default = 1;
INSERT INTO orders SELECT * FROM s3('https://clickhouse-datasets.s3.amazonaws.com/h/100/orders.tbl.gz', NOSIGN, CSV) SETTINGS format_csv_delimiter = '|', input_format_defaults_for_omitted_fields = 1, input_format_csv_empty_as_default = 1;
INSERT INTO lineitem SELECT * FROM s3('https://clickhouse-datasets.s3.amazonaws.com/h/100/lineitem.tbl.gz', NOSIGN, CSV) SETTINGS format_csv_delimiter = '|', input_format_defaults_for_omitted_fields = 1, input_format_csv_empty_as_default = 1;
```
:::

## Queries {#queries}

:::note
Setting `join_use_nulls` should be enabled to produce correct results according to the SQL standard.
:::

:::note
Some TPC-H queries use correlated subqueries, which are available since v25.8. Please use at least this ClickHouse version to run the queries. In ClickHouse versions 25.5, 25.6, and 25.7, it is additionally necessary to set:

```sql
SET allow_experimental_correlated_subqueries = 1;
```
:::

The queries are generated by `./qgen -s <scaling_factor>`. Example queries for `s = 100` below:

**Correctness**

The result of the queries agrees with the official results unless mentioned otherwise. To verify, generate a TPC-H database with scale factor = 1 (`dbgen`, see above) and compare with the expected results in tpch-kit.
**Q1**

```sql
SELECT
    l_returnflag,
    l_linestatus,
    sum(l_quantity) AS sum_qty,
    sum(l_extendedprice) AS sum_base_price,
    sum(l_extendedprice * (1 - l_discount)) AS sum_disc_price,
    sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) AS sum_charge,
    avg(l_quantity) AS avg_qty,
    avg(l_extendedprice) AS avg_price,
    avg(l_discount) AS avg_disc,
    count(*) AS count_order
FROM lineitem
WHERE l_shipdate <= DATE '1998-12-01' - INTERVAL '90' DAY
GROUP BY l_returnflag, l_linestatus
ORDER BY l_returnflag, l_linestatus;
```

**Q2**

```sql
SELECT
    s_acctbal,
    s_name,
    n_name,
    p_partkey,
    p_mfgr,
    s_address,
    s_phone,
    s_comment
FROM part, supplier, partsupp, nation, region
WHERE p_partkey = ps_partkey
    AND s_suppkey = ps_suppkey
    AND p_size = 15
    AND p_type LIKE '%BRASS'
    AND s_nationkey = n_nationkey
    AND n_regionkey = r_regionkey
    AND r_name = 'EUROPE'
    AND ps_supplycost = (
        SELECT min(ps_supplycost)
        FROM partsupp, supplier, nation, region
        WHERE p_partkey = ps_partkey
            AND s_suppkey = ps_suppkey
            AND s_nationkey = n_nationkey
            AND n_regionkey = r_regionkey
            AND r_name = 'EUROPE')
ORDER BY s_acctbal DESC, n_name, s_name, p_partkey;
```

**Q3**

```sql
SELECT
    l_orderkey,
    sum(l_extendedprice * (1 - l_discount)) AS revenue,
    o_orderdate,
    o_shippriority
FROM customer, orders, lineitem
WHERE c_mktsegment = 'BUILDING'
    AND c_custkey = o_custkey
    AND l_orderkey = o_orderkey
    AND o_orderdate < DATE '1995-03-15'
    AND l_shipdate > DATE '1995-03-15'
GROUP BY l_orderkey, o_orderdate, o_shippriority
ORDER BY revenue DESC, o_orderdate;
```

**Q4**

```sql
SELECT
    o_orderpriority,
    count(*) AS order_count
FROM orders
WHERE o_orderdate >= DATE '1993-07-01'
    AND o_orderdate < DATE '1993-07-01' + INTERVAL '3' MONTH
    AND EXISTS (
        SELECT *
        FROM lineitem
        WHERE l_orderkey = o_orderkey
            AND l_commitdate < l_receiptdate)
GROUP BY o_orderpriority
ORDER BY o_orderpriority;
```

**Q5**

```sql
SELECT
    n_name,
    sum(l_extendedprice * (1 - l_discount)) AS revenue
FROM customer, orders, lineitem, supplier, nation, region
WHERE c_custkey = o_custkey
    AND l_orderkey = o_orderkey
    AND l_suppkey = s_suppkey
    AND c_nationkey = s_nationkey
    AND s_nationkey = n_nationkey
    AND n_regionkey = r_regionkey
    AND r_name = 'ASIA'
    AND o_orderdate >= DATE '1994-01-01'
    AND o_orderdate < DATE '1994-01-01' + INTERVAL '1' year
GROUP BY n_name
ORDER BY revenue DESC;
```
**Q6**

```sql
SELECT sum(l_extendedprice * l_discount) AS revenue
FROM lineitem
WHERE l_shipdate >= DATE '1994-01-01'
    AND l_shipdate < DATE '1994-01-01' + INTERVAL '1' year
    AND l_discount BETWEEN 0.06 - 0.01 AND 0.06 + 0.01
    AND l_quantity < 24;
```

::::note
As of February 2025, the query does not work out-of-the-box due to a bug with Decimal addition. Corresponding issue: https://github.com/ClickHouse/ClickHouse/issues/70136

This alternative formulation works and was verified to return the reference results.

```sql
SELECT sum(l_extendedprice * l_discount) AS revenue
FROM lineitem
WHERE l_shipdate >= DATE '1994-01-01'
    AND l_shipdate < DATE '1994-01-01' + INTERVAL '1' year
    AND l_discount BETWEEN 0.05 AND 0.07
    AND l_quantity < 24;
```
::::

**Q7**

```sql
SELECT
    supp_nation,
    cust_nation,
    l_year,
    sum(volume) AS revenue
FROM (
    SELECT
        n1.n_name AS supp_nation,
        n2.n_name AS cust_nation,
        extract(year FROM l_shipdate) AS l_year,
        l_extendedprice * (1 - l_discount) AS volume
    FROM supplier, lineitem, orders, customer, nation n1, nation n2
    WHERE s_suppkey = l_suppkey
        AND o_orderkey = l_orderkey
        AND c_custkey = o_custkey
        AND s_nationkey = n1.n_nationkey
        AND c_nationkey = n2.n_nationkey
        AND (
            (n1.n_name = 'FRANCE' AND n2.n_name = 'GERMANY')
            OR (n1.n_name = 'GERMANY' AND n2.n_name = 'FRANCE'))
        AND l_shipdate BETWEEN DATE '1995-01-01' AND DATE '1996-12-31'
) AS shipping
GROUP BY supp_nation, cust_nation, l_year
ORDER BY supp_nation, cust_nation, l_year;
```

**Q8**

```sql
SELECT
    o_year,
    sum(CASE WHEN nation = 'BRAZIL' THEN volume ELSE 0 END) / sum(volume) AS mkt_share
FROM (
    SELECT
        extract(year FROM o_orderdate) AS o_year,
        l_extendedprice * (1 - l_discount) AS volume,
        n2.n_name AS nation
    FROM part, supplier, lineitem, orders, customer, nation n1, nation n2, region
    WHERE p_partkey = l_partkey
        AND s_suppkey = l_suppkey
        AND l_orderkey = o_orderkey
        AND o_custkey = c_custkey
        AND c_nationkey = n1.n_nationkey
        AND n1.n_regionkey = r_regionkey
        AND r_name = 'AMERICA'
        AND s_nationkey = n2.n_nationkey
        AND o_orderdate BETWEEN DATE '1995-01-01' AND DATE '1996-12-31'
        AND p_type = 'ECONOMY ANODIZED STEEL'
) AS all_nations
GROUP BY o_year
ORDER BY o_year;
```
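As general background on the Q6 note above: the rewritten `BETWEEN 0.05 AND 0.07` bounds are arithmetically equivalent to `0.06 - 0.01` and `0.06 + 0.01` under exact decimal semantics, while binary floating point cannot express these literals exactly. A quick Python illustration of this standard behavior (it does not reproduce the ClickHouse issue itself):

```python
from decimal import Decimal

# Binary floats cannot represent 0.06 or 0.01 exactly, so the
# subtraction drifts away from the mathematically exact 0.05.
float_bound = 0.06 - 0.01

# Exact decimal arithmetic yields the precomputed bound exactly.
decimal_bound = Decimal("0.06") - Decimal("0.01")

print(float_bound)    # not exactly 0.05
print(decimal_bound)  # 0.05
```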
**Q9**

```sql
SELECT
    nation,
    o_year,
    sum(amount) AS sum_profit
FROM (
    SELECT
        n_name AS nation,
        extract(year FROM o_orderdate) AS o_year,
        l_extendedprice * (1 - l_discount) - ps_supplycost * l_quantity AS amount
    FROM part, supplier, lineitem, partsupp, orders, nation
    WHERE s_suppkey = l_suppkey
        AND ps_suppkey = l_suppkey
        AND ps_partkey = l_partkey
        AND p_partkey = l_partkey
        AND o_orderkey = l_orderkey
        AND s_nationkey = n_nationkey
        AND p_name LIKE '%green%'
) AS profit
GROUP BY nation, o_year
ORDER BY nation, o_year DESC;
```

**Q10**

```sql
SELECT
    c_custkey,
    c_name,
    sum(l_extendedprice * (1 - l_discount)) AS revenue,
    c_acctbal,
    n_name,
    c_address,
    c_phone,
    c_comment
FROM customer, orders, lineitem, nation
WHERE c_custkey = o_custkey
    AND l_orderkey = o_orderkey
    AND o_orderdate >= DATE '1993-10-01'
    AND o_orderdate < DATE '1993-10-01' + INTERVAL '3' MONTH
    AND l_returnflag = 'R'
    AND c_nationkey = n_nationkey
GROUP BY c_custkey, c_name, c_acctbal, c_phone, n_name, c_address, c_comment
ORDER BY revenue DESC;
```

**Q11**

```sql
SELECT
    ps_partkey,
    sum(ps_supplycost * ps_availqty) AS value
FROM partsupp, supplier, nation
WHERE ps_suppkey = s_suppkey
    AND s_nationkey = n_nationkey
    AND n_name = 'GERMANY'
GROUP BY ps_partkey
HAVING sum(ps_supplycost * ps_availqty) > (
    SELECT sum(ps_supplycost * ps_availqty) * 0.0001
    FROM partsupp, supplier, nation
    WHERE ps_suppkey = s_suppkey
        AND s_nationkey = n_nationkey
        AND n_name = 'GERMANY')
ORDER BY value DESC;
```

**Q12**

```sql
SELECT
    l_shipmode,
    sum(CASE WHEN o_orderpriority = '1-URGENT' OR o_orderpriority = '2-HIGH' THEN 1 ELSE 0 END) AS high_line_count,
    sum(CASE WHEN o_orderpriority <> '1-URGENT' AND o_orderpriority <> '2-HIGH' THEN 1 ELSE 0 END) AS low_line_count
FROM orders, lineitem
WHERE o_orderkey = l_orderkey
    AND l_shipmode IN ('MAIL', 'SHIP')
    AND l_commitdate < l_receiptdate
    AND l_shipdate < l_commitdate
    AND l_receiptdate >= DATE '1994-01-01'
    AND l_receiptdate < DATE '1994-01-01' + INTERVAL '1' year
GROUP BY l_shipmode
ORDER BY l_shipmode;
```

**Q13**

```sql
SELECT
    c_count,
    count(*) AS custdist
FROM (
    SELECT
        c_custkey,
        count(o_orderkey) AS c_count
    FROM customer
    LEFT OUTER JOIN orders ON c_custkey = o_custkey AND o_comment NOT LIKE '%special%requests%'
    GROUP BY c_custkey
) AS c_orders
GROUP BY c_count
ORDER BY custdist DESC, c_count DESC;
```
**Q14**

```sql
SELECT 100.00 * sum(CASE WHEN p_type LIKE 'PROMO%' THEN l_extendedprice * (1 - l_discount) ELSE 0 END) / sum(l_extendedprice * (1 - l_discount)) AS promo_revenue
FROM lineitem, part
WHERE l_partkey = p_partkey
    AND l_shipdate >= DATE '1995-09-01'
    AND l_shipdate < DATE '1995-09-01' + INTERVAL '1' MONTH;
```

**Q15**

```sql
CREATE VIEW revenue0 (supplier_no, total_revenue) AS
    SELECT
        l_suppkey,
        sum(l_extendedprice * (1 - l_discount))
    FROM lineitem
    WHERE l_shipdate >= DATE '1996-01-01'
        AND l_shipdate < DATE '1996-01-01' + INTERVAL '3' MONTH
    GROUP BY l_suppkey;

SELECT
    s_suppkey,
    s_name,
    s_address,
    s_phone,
    total_revenue
FROM supplier, revenue0
WHERE s_suppkey = supplier_no
    AND total_revenue = (
        SELECT max(total_revenue)
        FROM revenue0)
ORDER BY s_suppkey;

DROP VIEW revenue0;
```

**Q16**

```sql
SELECT
    p_brand,
    p_type,
    p_size,
    count(DISTINCT ps_suppkey) AS supplier_cnt
FROM partsupp, part
WHERE p_partkey = ps_partkey
    AND p_brand <> 'Brand#45'
    AND p_type NOT LIKE 'MEDIUM POLISHED%'
    AND p_size IN (49, 14, 23, 45, 19, 3, 36, 9)
    AND ps_suppkey NOT IN (
        SELECT s_suppkey
        FROM supplier
        WHERE s_comment LIKE '%Customer%Complaints%')
GROUP BY p_brand, p_type, p_size
ORDER BY supplier_cnt DESC, p_brand, p_type, p_size;
```

**Q17**

```sql
SELECT sum(l_extendedprice) / 7.0 AS avg_yearly
FROM lineitem, part
WHERE p_partkey = l_partkey
    AND p_brand = 'Brand#23'
    AND p_container = 'MED BOX'
    AND l_quantity < (
        SELECT 0.2 * avg(l_quantity)
        FROM lineitem
        WHERE l_partkey = p_partkey);
```

**Q18**

```sql
SELECT
    c_name,
    c_custkey,
    o_orderkey,
    o_orderdate,
    o_totalprice,
    sum(l_quantity)
FROM customer, orders, lineitem
WHERE o_orderkey IN (
        SELECT l_orderkey
        FROM lineitem
        GROUP BY l_orderkey
        HAVING sum(l_quantity) > 300)
    AND c_custkey = o_custkey
    AND o_orderkey = l_orderkey
GROUP BY c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice
ORDER BY o_totalprice DESC, o_orderdate;
```
**Q19**

```sql
SELECT sum(l_extendedprice * (1 - l_discount)) AS revenue
FROM lineitem, part
WHERE (
        p_partkey = l_partkey
        AND p_brand = 'Brand#12'
        AND p_container IN ('SM CASE', 'SM BOX', 'SM PACK', 'SM PKG')
        AND l_quantity >= 1 AND l_quantity <= 1 + 10
        AND p_size BETWEEN 1 AND 5
        AND l_shipmode IN ('AIR', 'AIR REG')
        AND l_shipinstruct = 'DELIVER IN PERSON')
    OR (
        p_partkey = l_partkey
        AND p_brand = 'Brand#23'
        AND p_container IN ('MED BAG', 'MED BOX', 'MED PKG', 'MED PACK')
        AND l_quantity >= 10 AND l_quantity <= 10 + 10
        AND p_size BETWEEN 1 AND 10
        AND l_shipmode IN ('AIR', 'AIR REG')
        AND l_shipinstruct = 'DELIVER IN PERSON')
    OR (
        p_partkey = l_partkey
        AND p_brand = 'Brand#34'
        AND p_container IN ('LG CASE', 'LG BOX', 'LG PACK', 'LG PKG')
        AND l_quantity >= 20 AND l_quantity <= 20 + 10
        AND p_size BETWEEN 1 AND 15
        AND l_shipmode IN ('AIR', 'AIR REG')
        AND l_shipinstruct = 'DELIVER IN PERSON');
```

**Q20**

```sql
SELECT
    s_name,
    s_address
FROM supplier, nation
WHERE s_suppkey in (
        SELECT ps_suppkey
        FROM partsupp
        WHERE ps_partkey in (
                SELECT p_partkey
                FROM part
                WHERE p_name LIKE 'forest%')
            AND ps_availqty > (
                SELECT 0.5 * sum(l_quantity)
                FROM lineitem
                WHERE l_partkey = ps_partkey
                    AND l_suppkey = ps_suppkey
                    AND l_shipdate >= DATE '1994-01-01'
                    AND l_shipdate < DATE '1994-01-01' + INTERVAL '1' year))
    AND s_nationkey = n_nationkey
    AND n_name = 'CANADA'
ORDER BY s_name;
```

**Q21**

```sql
SELECT
    s_name,
    count(*) AS numwait
FROM supplier, lineitem l1, orders, nation
WHERE s_suppkey = l1.l_suppkey
    AND o_orderkey = l1.l_orderkey
    AND o_orderstatus = 'F'
    AND l1.l_receiptdate > l1.l_commitdate
    AND EXISTS (
        SELECT *
        FROM lineitem l2
        WHERE l2.l_orderkey = l1.l_orderkey
            AND l2.l_suppkey <> l1.l_suppkey)
    AND NOT EXISTS (
        SELECT *
        FROM lineitem l3
        WHERE l3.l_orderkey = l1.l_orderkey
            AND l3.l_suppkey <> l1.l_suppkey
            AND l3.l_receiptdate > l3.l_commitdate)
    AND s_nationkey = n_nationkey
    AND n_name = 'SAUDI ARABIA'
GROUP BY s_name
ORDER BY numwait DESC, s_name;
```
**Q22**

```sql
SELECT
    cntrycode,
    count(*) AS numcust,
    sum(c_acctbal) AS totacctbal
FROM (
    SELECT
        substring(c_phone FROM 1 for 2) AS cntrycode,
        c_acctbal
    FROM customer
    WHERE substring(c_phone FROM 1 for 2) in ('13', '31', '23', '29', '30', '18', '17')
        AND c_acctbal > (
            SELECT avg(c_acctbal)
            FROM customer
            WHERE c_acctbal > 0.00
                AND substring(c_phone FROM 1 for 2) in ('13', '31', '23', '29', '30', '18', '17'))
        AND NOT EXISTS (
            SELECT *
            FROM orders
            WHERE o_custkey = c_custkey)
) AS custsale
GROUP BY cntrycode
ORDER BY cntrycode;
```
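Q22's logic — bucket customers by the two-digit phone country code, keep those with an above-average positive balance and no orders, then count and sum per code — can be sketched on toy data. This is a hand-rolled Python illustration with made-up rows, not part of the benchmark:

```python
def global_sales_opportunity(customers, customers_with_orders, codes):
    # customers: iterable of (c_custkey, c_phone, c_acctbal) tuples.
    # customers_with_orders: set of custkeys that appear in orders.
    eligible = [(k, p[:2], b) for k, p, b in customers if p[:2] in codes]

    # Average over *positive* balances only, as in the inner subquery.
    positive = [b for _, _, b in eligible if b > 0.00]
    avg_bal = sum(positive) / len(positive)

    # Group qualifying customers by country code: (numcust, totacctbal).
    result = {}
    for custkey, code, bal in eligible:
        if bal > avg_bal and custkey not in customers_with_orders:
            numcust, total = result.get(code, (0, 0.0))
            result[code] = (numcust + 1, total + bal)
    return dict(sorted(result.items()))

customers = [
    (1, "13-555-0001", 100.0),
    (2, "13-555-0002", 500.0),
    (3, "31-555-0003", -50.0),
    (4, "99-555-0004", 900.0),  # code not in the target list
]
print(global_sales_opportunity(customers, {1}, {"13", "31"}))
# {'13': (1, 500.0)}
```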
---
description: 'Dataset with over 100 million records containing information about places on a map, such as shops, restaurants, parks, playgrounds, and monuments.'
sidebar_label: 'Foursquare places'
slug: /getting-started/example-datasets/foursquare-places
title: 'Foursquare places'
keywords: ['visualizing']
doc_type: 'guide'
---

import Image from '@theme/IdealImage';
import visualization_1 from '@site/static/images/getting-started/example-datasets/visualization_1.png';
import visualization_2 from '@site/static/images/getting-started/example-datasets/visualization_2.png';
import visualization_3 from '@site/static/images/getting-started/example-datasets/visualization_3.png';
import visualization_4 from '@site/static/images/getting-started/example-datasets/visualization_4.png';

## Dataset {#dataset}

This dataset by Foursquare is available to download and to use for free under the Apache 2.0 license. It contains over 100 million records of commercial points-of-interest (POI), such as shops, restaurants, parks, playgrounds, and monuments. It also includes additional metadata about those places, such as categories and social media information.

## Data exploration {#data-exploration}

For exploring the data we'll use clickhouse-local, a small command-line tool that provides the full ClickHouse engine, although you could also use ClickHouse Cloud, clickhouse-client, or even chDB.
Run the following query to select the data from the s3 bucket where the data is stored:

```sql title="Query"
SELECT * FROM s3('s3://fsq-os-places-us-east-1/release/dt=2025-04-08/places/parquet/*') LIMIT 1
```

```response title="Response"
Row 1:
──────
fsq_place_id:        4e1ef76cae60cd553dec233f
name:                @VirginAmerica In-flight Via @Gogo
latitude:            37.62120111687914
longitude:           -122.39003793803701
address:             ᴺᵁᴸᴸ
locality:            ᴺᵁᴸᴸ
region:              ᴺᵁᴸᴸ
postcode:            ᴺᵁᴸᴸ
admin_region:        ᴺᵁᴸᴸ
post_town:           ᴺᵁᴸᴸ
po_box:              ᴺᵁᴸᴸ
country:             US
date_created:        2011-07-14
date_refreshed:      2018-07-05
date_closed:         2018-07-05
tel:                 ᴺᵁᴸᴸ
website:             ᴺᵁᴸᴸ
email:               ᴺᵁᴸᴸ
facebook_id:         ᴺᵁᴸᴸ
instagram:           ᴺᵁᴸᴸ
twitter:             ᴺᵁᴸᴸ
fsq_category_ids:    ['4bf58dd8d48988d1f7931735']
fsq_category_labels: ['Travel and Transportation > Transport Hub > Airport > Plane']
placemaker_url:      https://foursquare.com/placemakers/review-place/4e1ef76cae60cd553dec233f
geom:                �^��a�^@Bσ���
bbox:                (-122.39003793803701,37.62120111687914,-122.39003793803701,37.62120111687914)
```

We see that quite a few fields have ᴺᵁᴸᴸ, so we can add some additional conditions to our query to get back more usable data:

```sql title="Query"
SELECT * FROM s3('s3://fsq-os-places-us-east-1/release/dt=2025-04-08/places/parquet/*')
WHERE address IS NOT NULL AND postcode IS NOT NULL AND instagram IS NOT NULL
LIMIT 1
```
```response
Row 1:
──────
fsq_place_id:        59b2c754b54618784f259654
name:                Villa 722
latitude:            ᴺᵁᴸᴸ
longitude:           ᴺᵁᴸᴸ
address:             Gijzenveldstraat 75
locality:            Zutendaal
region:              Limburg
postcode:            3690
admin_region:        ᴺᵁᴸᴸ
post_town:           ᴺᵁᴸᴸ
po_box:              ᴺᵁᴸᴸ
country:             ᴺᵁᴸᴸ
date_created:        2017-09-08
date_refreshed:      2020-01-25
date_closed:         ᴺᵁᴸᴸ
tel:                 ᴺᵁᴸᴸ
website:             https://www.landal.be
email:               ᴺᵁᴸᴸ
facebook_id:         522698844570949 -- 522.70 trillion
instagram:           landalmooizutendaal
twitter:             landalzdl
fsq_category_ids:    ['56aa371be4b08b9a8d5734e1']
fsq_category_labels: ['Travel and Transportation > Lodging > Vacation Rental']
placemaker_url:      https://foursquare.com/placemakers/review-place/59b2c754b54618784f259654
geom:                ᴺᵁᴸᴸ
bbox:                (NULL,NULL,NULL,NULL)
```

Run the following query to view the automatically inferred schema of the data using DESCRIBE:

```sql title="Query"
DESCRIBE s3('s3://fsq-os-places-us-east-1/release/dt=2025-04-08/places/parquet/*')
```
```response title="Response"
┌─name────────────────┬─type────────────────────────┬
 1. │ fsq_place_id        │ Nullable(String)         │
 2. │ name                │ Nullable(String)         │
 3. │ latitude            │ Nullable(Float64)        │
 4. │ longitude           │ Nullable(Float64)        │
 5. │ address             │ Nullable(String)         │
 6. │ locality            │ Nullable(String)         │
 7. │ region              │ Nullable(String)         │
 8. │ postcode            │ Nullable(String)         │
 9. │ admin_region        │ Nullable(String)         │
10. │ post_town           │ Nullable(String)         │
11. │ po_box              │ Nullable(String)         │
12. │ country             │ Nullable(String)         │
13. │ date_created        │ Nullable(String)         │
14. │ date_refreshed      │ Nullable(String)         │
15. │ date_closed         │ Nullable(String)         │
16. │ tel                 │ Nullable(String)         │
17. │ website             │ Nullable(String)         │
18. │ email               │ Nullable(String)         │
19. │ facebook_id         │ Nullable(Int64)          │
20. │ instagram           │ Nullable(String)         │
21. │ twitter             │ Nullable(String)         │
22. │ fsq_category_ids    │ Array(Nullable(String))  │
23. │ fsq_category_labels │ Array(Nullable(String))  │
24. │ placemaker_url      │ Nullable(String)         │
25. │ geom                │ Nullable(String)         │
26. │ bbox                │ Tuple(                  ↴│
    │                     │↳    xmin Nullable(Float64),↴│
    │                     │↳    ymin Nullable(Float64),↴│
    │                     │↳    xmax Nullable(Float64),↴│
    │                     │↳    ymax Nullable(Float64))  │
    └─────────────────────┴─────────────────────────────┘
```

## Loading the data into ClickHouse {#loading-the-data}

If you'd like to persist the data on disk, you can use clickhouse-server or ClickHouse Cloud. To create the table, run the following command:
```sql title="Query"
CREATE TABLE foursquare_mercator
(
    fsq_place_id String,
    name String,
    latitude Float64,
    longitude Float64,
    address String,
    locality String,
    region LowCardinality(String),
    postcode LowCardinality(String),
    admin_region LowCardinality(String),
    post_town LowCardinality(String),
    po_box LowCardinality(String),
    country LowCardinality(String),
    date_created Nullable(Date),
    date_refreshed Nullable(Date),
    date_closed Nullable(Date),
    tel String,
    website String,
    email String,
    facebook_id String,
    instagram String,
    twitter String,
    fsq_category_ids Array(String),
    fsq_category_labels Array(String),
    placemaker_url String,
    geom String,
    bbox Tuple(
        xmin Nullable(Float64),
        ymin Nullable(Float64),
        xmax Nullable(Float64),
        ymax Nullable(Float64)
    ),
    category LowCardinality(String) ALIAS fsq_category_labels[1],
    mercator_x UInt32 MATERIALIZED 0xFFFFFFFF * ((longitude + 180) / 360),
    mercator_y UInt32 MATERIALIZED 0xFFFFFFFF * ((1 / 2) - ((log(tan(((latitude + 90) / 360) * pi())) / 2) / pi())),
    INDEX idx_x mercator_x TYPE minmax,
    INDEX idx_y mercator_y TYPE minmax
)
ORDER BY mortonEncode(mercator_x, mercator_y)
```

Take note of the use of the LowCardinality data type for several columns, which changes the internal representation of those columns to be dictionary-encoded. Operating with dictionary-encoded data significantly increases the performance of SELECT queries for many applications.

Additionally, two UInt32 MATERIALIZED columns, mercator_x and mercator_y, are created that map the lat/lon coordinates to the Web Mercator projection for easier segmentation of the map into tiles:

```sql
mercator_x UInt32 MATERIALIZED 0xFFFFFFFF * ((longitude + 180) / 360),
mercator_y UInt32 MATERIALIZED 0xFFFFFFFF * ((1 / 2) - ((log(tan(((latitude + 90) / 360) * pi())) / 2) / pi())),
```

Let's break down what is happening above for each column.

**mercator_x**

This column converts a longitude value into an X coordinate in the Mercator projection:

- `longitude + 180` shifts the longitude range from [-180, 180] to [0, 360]
- Dividing by 360 normalizes this to a value between 0 and 1
- Multiplying by `0xFFFFFFFF` (hex for the maximum 32-bit unsigned integer) scales this normalized value to the full range of a 32-bit integer

**mercator_y**

This column converts a latitude value into a Y coordinate in the Mercator projection:

- `latitude + 90` shifts latitude from [-90, 90] to [0, 180]
- Dividing by 360 and multiplying by `pi()` converts to radians for the trigonometric functions
- The `log(tan(...))` part is the core of the Mercator projection formula
- Multiplying by `0xFFFFFFFF` scales to the full 32-bit integer range
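To build intuition for this arithmetic and for the Morton-curve ordering used in the CREATE TABLE, here is a small Python sketch. The function names are illustrative, and the Morton bit order here may differ from ClickHouse's `mortonEncode` implementation:

```python
import math

MAX_U32 = 0xFFFFFFFF

def mercator_x(longitude: float) -> int:
    # Shift [-180, 180] to [0, 360], normalize to [0, 1], scale to 32 bits.
    return int(MAX_U32 * ((longitude + 180) / 360))

def mercator_y(latitude: float) -> int:
    # Same arithmetic as the MATERIALIZED column definition above.
    return int(MAX_U32 * (0.5 - (math.log(math.tan(((latitude + 90) / 360) * math.pi)) / 2) / math.pi))

def morton_encode(x: int, y: int) -> int:
    # Interleave bits: x takes the even bit positions, y the odd ones,
    # so points close in both dimensions get close sort keys.
    code = 0
    for i in range(32):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

# The equator/prime-meridian point lands in the middle of both axes.
x, y = mercator_x(0.0), mercator_y(0.0)
print(x, y, morton_encode(x, y))
```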
Specifying MATERIALIZED ensures that ClickHouse calculates the values for these columns when we INSERT the data, without having to list these columns (which are not part of the original data schema) in the `INSERT` statement.

The table is ordered by `mortonEncode(mercator_x, mercator_y)`, which produces a Z-order space-filling curve over `mercator_x` and `mercator_y` to significantly improve geospatial query performance. This Z-order curve ordering ensures data is physically organized by spatial proximity:

```sql
ORDER BY mortonEncode(mercator_x, mercator_y)
```

Two minmax indices are also created for faster search:

```sql
INDEX idx_x mercator_x TYPE minmax,
INDEX idx_y mercator_y TYPE minmax
```

As you can see, ClickHouse has absolutely everything you need for real-time mapping applications!

Run the following query to load the data:

```sql
INSERT INTO foursquare_mercator
SELECT * FROM s3('s3://fsq-os-places-us-east-1/release/dt=2025-04-08/places/parquet/*')
```

## Visualizing the data {#data-visualization}

To see what's possible with this dataset, check out adsb.exposed. adsb.exposed was originally built by co-founder and CTO Alexey Milovidov to visualize ADS-B (Automatic Dependent Surveillance-Broadcast) flight data, which is 1000x larger. During a company hackathon, Alexey added the Foursquare data to the tool. Some of our favourite visualizations are reproduced below for you to enjoy.
---
description: 'A collection of dislikes of YouTube videos.'
sidebar_label: 'YouTube dislikes'
slug: /getting-started/example-datasets/youtube-dislikes
title: 'YouTube dataset of dislikes'
doc_type: 'guide'
keywords: ['example dataset', 'youtube', 'sample data', 'video analytics', 'dislikes']
---

In November of 2021, YouTube removed the public dislike count from all of its videos. While creators can still see the number of dislikes, viewers can only see how many likes a video has received.

:::important
The dataset has over 4.55 billion records, so be careful just copying-and-pasting the commands below unless your resources can handle that type of volume. The commands below were executed on a Production instance of ClickHouse Cloud.
:::

The data is in a JSON format and can be downloaded from archive.org. We have made this same data available in S3 so that it can be downloaded more efficiently into a ClickHouse Cloud instance.

Here are the steps to create a table in ClickHouse Cloud and insert the data.

:::note
The steps below will easily work on a local install of ClickHouse too. The only change would be to use the s3 function instead of s3cluster (unless you have a cluster configured - in which case change default to the name of your cluster).
:::

## Step-by-step instructions {#step-by-step-instructions}

### Data exploration {#data-exploration}

Let's see what the data looks like. The s3cluster table function returns a table, so we can DESCRIBE the result:

```sql
DESCRIBE s3(
    'https://clickhouse-public-datasets.s3.amazonaws.com/youtube/original/files/*.zst',
    'JSONLines'
);
```

ClickHouse infers the following schema from the JSON file:
```response
┌─name────────────────┬─type───────────────────────────────────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ id                  │ Nullable(String)                                           │              │                    │         │                  │                │
│ fetch_date          │ Nullable(String)                                           │              │                    │         │                  │                │
│ upload_date         │ Nullable(String)                                           │              │                    │         │                  │                │
│ title               │ Nullable(String)                                           │              │                    │         │                  │                │
│ uploader_id         │ Nullable(String)                                           │              │                    │         │                  │                │
│ uploader            │ Nullable(String)                                           │              │                    │         │                  │                │
│ uploader_sub_count  │ Nullable(Int64)                                            │              │                    │         │                  │                │
│ is_age_limit        │ Nullable(Bool)                                             │              │                    │         │                  │                │
│ view_count          │ Nullable(Int64)                                            │              │                    │         │                  │                │
│ like_count          │ Nullable(Int64)                                            │              │                    │         │                  │                │
│ dislike_count       │ Nullable(Int64)                                            │              │                    │         │                  │                │
```
{"source_file": "youtube-dislikes.md"}
[ -0.059024736285209656, 0.012273004278540611, -0.02760537900030613, 0.03580223023891449, 0.024679679423570633, 0.017860589548945427, -0.020483434200286865, 0.0004459153860807419, -0.05077670142054558, 0.061257828027009964, 0.10194966197013855, -0.039626944810152054, 0.07062436640262604, -0....
b03f9d0a-3c2b-4e82-97bd-9d1dfac6d258
│ is_crawlable │ Nullable(Bool) │ │ │ │ │ │ │ is_live_content │ Nullable(Bool) │ │ │ │ │ │ │ has_subtitles │ Nullable(Bool) │ │ │ │ │ │ │ is_ads_enabled │ Nullable(Bool) │ │ │ │ │ │ │ is_comments_enabled │ Nullable(Bool) │ │ │ │ │ │ │ description │ Nullable(String) │ │ │ │ │ │ │ rich_metadata │ Array(Tuple(call Nullable(String), content Nullable(String), subtitle Nullable(String), title Nullable(String), url Nullable(String))) │ │ │ │ │ │ │ super_titles │ Array(Tuple(text Nullable(String), url Nullable(String))) │ │ │ │ │ │ │ uploader_badges │ Nullable(String) │ │ │ │ │ │ │ video_badges │ Nullable(String) │ │ │ │ │ │ └─────────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
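Before loading anything, it can help to eyeball a few raw rows straight from S3. A minimal sketch, assuming you only want a quick sample - the column choice here is illustrative, and the `LIMIT` keeps the scan cheap:

```sql
SELECT id, title, uploader, view_count, like_count, dislike_count
FROM s3(
    'https://clickhouse-public-datasets.s3.amazonaws.com/youtube/original/files/*.zst',
    'JSONLines'
)
LIMIT 3;
```

Note that even with a `LIMIT`, ClickHouse still has to download and decompress at least one file to serve the query, so expect a short delay on the first run.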
### Create the table {#create-the-table}

Based on the inferred schema, we cleaned up the data types and added a primary key. Define the following table:

```sql
CREATE TABLE youtube
(
    `id` String,
    `fetch_date` DateTime,
    `upload_date_str` String,
    `upload_date` Date,
    `title` String,
    `uploader_id` String,
    `uploader` String,
    `uploader_sub_count` Int64,
    `is_age_limit` Bool,
    `view_count` Int64,
    `like_count` Int64,
    `dislike_count` Int64,
    `is_crawlable` Bool,
    `has_subtitles` Bool,
    `is_ads_enabled` Bool,
    `is_comments_enabled` Bool,
    `description` String,
    `rich_metadata` Array(Tuple(call String, content String, subtitle String, title String, url String)),
    `super_titles` Array(Tuple(text String, url String)),
    `uploader_badges` String,
    `video_badges` String
)
ENGINE = MergeTree
ORDER BY (uploader, upload_date)
```

### Insert data {#insert-data}

The following command streams the records from the S3 files into the `youtube` table.

:::important
This inserts a lot of data - 4.65 billion rows. If you do not want the entire dataset, simply add a `LIMIT` clause with the desired number of rows.
:::

```sql
INSERT INTO youtube
SETTINGS input_format_null_as_default = 1
SELECT
    id,
    parseDateTimeBestEffortUSOrZero(toString(fetch_date)) AS fetch_date,
    upload_date AS upload_date_str,
    toDate(parseDateTimeBestEffortUSOrZero(upload_date::String)) AS upload_date,
    ifNull(title, '') AS title,
    uploader_id,
    ifNull(uploader, '') AS uploader,
    uploader_sub_count,
    is_age_limit,
    view_count,
    like_count,
    dislike_count,
    is_crawlable,
    has_subtitles,
    is_ads_enabled,
    is_comments_enabled,
    ifNull(description, '') AS description,
    rich_metadata,
    super_titles,
    ifNull(uploader_badges, '') AS uploader_badges,
    ifNull(video_badges, '') AS video_badges
FROM s3(
    'https://clickhouse-public-datasets.s3.amazonaws.com/youtube/original/files/*.zst',
    'JSONLines'
)
```

Some comments about our `INSERT` command:

- The `parseDateTimeBestEffortUSOrZero` function is handy when the incoming date fields may not be in the proper format. If `fetch_date` does not get parsed properly, it will be set to `0`.
- The `upload_date` column contains valid dates, but it also contains strings like "4 hours ago", which is certainly not a valid date. We decided to store the original value in `upload_date_str` and attempt to parse it with `toDate(parseDateTimeBestEffortUSOrZero(upload_date::String))`. If the parsing fails we just get `0`.
- We used `ifNull` to avoid getting `NULL` values in our table. If an incoming value is `NULL`, the `ifNull` function sets the value to an empty string.

### Count the number of rows {#count-row-numbers}
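As a first sanity check after the load finishes, count the rows and compare against the figures quoted above. A minimal query - `formatReadableQuantity` just renders the large number in a human-friendly form:

```sql
SELECT formatReadableQuantity(count())
FROM youtube;
```

Remember that if you added a `LIMIT` to the `INSERT`, the count will reflect only the rows you chose to load, not the full dataset.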