| id (string, 36 chars) | document (string, 3–3k chars) | metadata (string, 23–69 chars) | embeddings (list, 384 floats) |
|---|---|---|---|
24c1ced7-6aa1-4094-9768-ddec05476191 | | [Open Targets](https://www.opentargets.org/) | Genome Research | Genome Search | [Twitter, October 2021](https://twitter.com/OpenTargets/status/1452570865342758913?s=20), [Blog](https://blog.opentargets.org/graphql/) | — | — |
| [OpenLIT](https://openlit.io/) | Software & Technology | OTEL Monitoring with AI | [GitHub](https://github.com/openlit/openlit) | — | — |
| [OpenMeter](https://openmeter.io) | Expense Management | Main product | [Official blog post, 2023](https://openmeter.io/blog/how-openmeter-uses-clickhouse-for-usage-metering#heading-querying-historical-usage) | — | — | | {"source_file": "adopters.md"} | [
-0.04487544298171997,
-0.027064405381679535,
-0.07444453984498978,
0.05776431784033775,
0.1231905072927475,
-0.09069045633077621,
-0.0400422103703022,
0.00600002147257328,
-0.04832477495074272,
0.01882551610469818,
0.027836987748742104,
-0.012629461474716663,
0.014840265735983849,
-0.00833... |
d4ad5e84-f387-4954-8bcb-dd4e9a0b4771 | | [OpenReplay](https://openreplay.com/) | Product Analytics | Session Replay | [Docs](https://docs.openreplay.com/en/deployment/openreplay-admin/) | — | — |
| [Opensee](https://opensee.io/) | Financial Analytics | Main product | [Blog Post, February 2022](https://clickhouse.com/blog/opensee-analyzing-terabytes-of-financial-data-a-day-with-clickhouse/) [Blog Post, December 2021](https://opensee.io/news/from-moscow-to-wall-street-the-remarkable-journey-of-clickhouse/) | — | — |
| [Oppo](https://www.oppo.com/cn/) | Hardware | Consumer Electronics Company | ClickHouse Meetup in Chengdu, April 2024 | — | — | | {"source_file": "adopters.md"} | [
-0.036135897040367126,
-0.034135494381189346,
-0.05656993016600609,
0.02177697792649269,
0.02385563962161541,
-0.031171133741736412,
-0.022452102974057198,
0.012060241773724556,
-0.024466169998049736,
-0.027421170845627785,
0.0036109143402427435,
0.03941089287400246,
-0.03390289470553398,
... |
619b2f6e-56f1-4ec2-b519-09214d82f8fc | | [OpsVerse](https://opsverse.io/) | Observability | — | [Twitter, 2022](https://twitter.com/OpsVerse/status/1584548242100219904) | — | — |
| [Opstrace](https://opstrace.com/) | Observability | — | [Source code](https://gitlab.com/gitlab-org/opstrace/jaeger-clickhouse/-/blob/main/README.md) | — | — |
| [Outerbase](https://www.outerbase.com/) | Software & Technology | Database Interface | [Official Website](https://www.outerbase.com/) | — | — | | {"source_file": "adopters.md"} | [
-0.10203530639410019,
-0.051012374460697174,
-0.03710969164967537,
-0.016610311344265938,
0.05432962253689766,
-0.10857487469911575,
-0.06470691412687302,
0.03744567558169365,
-0.03585991635918617,
0.012180362828075886,
0.019063714891672134,
-0.0016253471840173006,
0.0892384722828865,
-0.0... |
94d06a0a-b04c-4764-aaea-456c97cc7851 | | [Oxide](https://oxide.computer/) | Hardware & Software | Server Control Plane | [GitHub Repository](https://github.com/oxidecomputer/omicron) | — | — |
| [OZON](https://corp.ozon.com/) | E-commerce | — | [Official website](https://job.ozon.ru/vacancy/razrabotchik-clickhouse-ekspluatatsiya-40991870/) | — | — |
| [PITS Globale Datenrettungsdienste](https://www.pitsdatenrettung.de/) | Data Recovery | Analytics | — | — | — | | {"source_file": "adopters.md"} | [
-0.11321195214986801,
0.054746393114328384,
-0.09373011440038681,
0.051404114812612534,
0.08242423832416534,
-0.11756984889507294,
0.0018143863417208195,
0.08036866039037704,
-0.07153616845607758,
-0.05735490843653679,
0.03254816308617592,
0.024174116551876068,
-0.057115085422992706,
0.001... |
42b9d88c-6946-41e2-90bd-38ee0339ede2 | | [PRANA](https://prana-system.com/en/) | Industrial predictive analytics | Main product | [News (russian), Feb 2021](https://habr.com/en/news/t/541392/) | — | — |
| [Pace](https://www.paceapp.com/) | Marketing & Sales | Internal app | ClickHouse Cloud user | — | — |
| [Panelbear](https://panelbear.com/) | Analytics | Monitoring and Analytics | [Tech Stack, November 2020](https://panelbear.com/blog/tech-stack/) | — | — | | {"source_file": "adopters.md"} | [
-0.11673640459775925,
-0.06016867980360985,
-0.11348958313465118,
0.02257874235510826,
0.08091194182634354,
-0.003758285893127322,
0.027831144630908966,
-0.005964154377579689,
-0.030524274334311485,
0.024522915482521057,
-0.006424527149647474,
0.019609887152910233,
0.003189271315932274,
-0... |
4b7fb1b2-a10f-4410-b5ca-d89bdf2d8834 | | [Papermark](https://www.papermark.io/) | Software & Technology | Document Sharing & Analytics | [Twitter, September 2023](https://twitter.com/mfts0/status/1698670144367567263) | — | — |
| [Parcel Perform](https://www.parcelperform.com/) | E-commerce | Real-Time Analytics | [Ho Chi Minh Meetup talk, April 2025](https://clickhouse.com/videos/hochiminh-meetup-parcel-perform-clickhouse-at-a-midsize-company) | — | — |
| [Percent 百分点](https://www.percent.cn/) | Analytics | Main Product | [Slides in Chinese, June 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup24/4.%20ClickHouse万亿数据双中心的设计与实践%20.pdf) | — | — | | {"source_file": "adopters.md"} | [
-0.07088857889175415,
-0.0017484138952568173,
-0.007441618945449591,
-0.02838905341923237,
0.029148736968636513,
0.02025790512561798,
0.025099683552980423,
0.0015966299688443542,
-0.014226607978343964,
-0.07951687276363373,
0.11721473187208176,
0.030052905902266502,
0.0026865405961871147,
... |
dc028d21-3409-47bf-8f79-a9de5cca5b31 | | [Percona](https://www.percona.com/) | Performance analysis | Percona Monitoring and Management | [Official website, Mar 2020](https://www.percona.com/blog/2020/03/30/advanced-query-analysis-in-percona-monitoring-and-management-with-direct-clickhouse-access/) | — | — |
| [Phare](https://phare.io/) | Uptime Monitoring | Main Product | [Official website, Aug 2023](https://docs.phare.io/changelog/platform/2023#faster-monitoring-statistics) | — | — |
| [PheLiGe](https://phelige.com/about) | Software & Technology | Genetic Studies | [Academic Paper, November 2020](https://academic.oup.com/nar/article/49/D1/D1347/6007654?login=false) | — | — | | {"source_file": "adopters.md"} | [
-0.07573098689317703,
-0.0018680694047361612,
-0.09496553242206573,
-0.007528624031692743,
0.014305693097412586,
-0.07400964200496674,
-0.000465126009657979,
0.08908659219741821,
-0.03664363920688629,
0.01991802640259266,
0.04005279019474983,
0.022301355376839638,
0.018388455733656883,
0.0... |
9651ed0e-2851-4540-803c-1c169b07a1e5 | | [Physics Wallah](https://www.pw.live/) | Education Technology | Real-Time Analytics | [Gurgaon Meetup talk, March 2025](https://clickhouse.com/videos/gurgaon-meetup-clickhouse-at-physics-wallah) | — | — |
| [PingCAP](https://pingcap.com/) | Analytics | Real-Time Transactional and Analytical Processing | [GitHub, TiFlash/TiDB](https://github.com/pingcap/tiflash) | — | — |
| [Pirsch](https://pirsch.io/) | Software & Technology | Web Analytics | [Hacker News, April 2023](https://news.ycombinator.com/item?id=35692201) | — | — | | {"source_file": "adopters.md"} | [
-0.07914242893457413,
-0.05532676354050636,
-0.0998878926038742,
0.012418192811310291,
0.018609430640935898,
-0.06665308773517609,
0.007416912354528904,
-0.023166846483945847,
-0.033691078424453735,
0.04016173630952835,
0.037226129323244095,
-0.04307899251580238,
0.003345537930727005,
0.02... |
73dc677d-3422-4656-9145-20400f3d3d0f | | [Piwik PRO](https://piwik.pro/) | Web Analytics | — | [Official website, Dec 2018](https://piwik.pro/blog/piwik-pro-clickhouse-faster-efficient-reports/) | — | — |
| [Plane](https://plane.so/) | Software & Technology | Project Management | [Twitter, September 2023](https://twitter.com/vamsi_kurama/status/1699593472704176441) | — | — |
| [Plausible](https://plausible.io/) | Analytics | Main Product | [Blog Post, December 2021](https://clickhouse.com/blog/plausible-analytics-uses-click-house-to-power-their-privacy-friendly-google-analytics-alternative) [Twitter, June 2020](https://twitter.com/PlausibleHQ/status/1273889629087969280) | — | — | | {"source_file": "adopters.md"} | [
-0.0978558138012886,
-0.020078178495168686,
-0.09376481920480728,
-0.004165879916399717,
0.09176979213953018,
-0.03314285725355148,
-0.04068110138177872,
0.005991289857774973,
0.01560191623866558,
0.04974762350320816,
0.031472720205783844,
0.056726954877376556,
0.06103724241256714,
0.03804... |
ab6e2ba5-f379-462c-b5b0-ae12fecd113a | | [PoeticMetric](https://www.poeticmetric.com/) | Metrics | Main Product | Community Slack, April 2022 | — | — |
| [PQL](https://pql.dev/) | Software & Technology | SQL Query Tool | [Official Website](https://pql.dev/) | — | — |
| [Portkey AI](https://portkey.ai/) | LLMOps | Main Product | [LinkedIn post, August 2023](https://www.linkedin.com/feed/update/urn:li:activity:7094676373826330626/) | — | — | | {"source_file": "adopters.md"} | [
-0.028544563800096512,
-0.09927212446928024,
0.004857816267758608,
-0.027759060263633728,
-0.003603630932047963,
0.04090544953942299,
0.004148502368479967,
-0.04229762777686119,
-0.008075691759586334,
0.01605048030614853,
0.020985711365938187,
-0.00027574540581554174,
0.03029411844909191,
... |
81698954-0478-4d57-a675-a5878736e2f4 | | [PostHog](https://posthog.com/) | Product Analytics | Main Product | [Release Notes, October 2020](https://posthog.com/blog/the-posthog-array-1-15-0), [Blog, November 2021](https://posthog.com/blog/how-we-turned-clickhouse-into-our-eventmansion) | — | — |
| [Postmates](https://postmates.com/) | Delivery | — | [Talk in English, July 2020](https://youtu.be/GMiXCMFDMow?t=188) | — | — |
| [Pragma Innovation](http://www.pragma-innovation.fr/) | Telemetry and Big Data Analysis | Main product | [Slides in English, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup18/4_pragma_innovation.pdf) | — | — | | {"source_file": "adopters.md"} | [
-0.0741364061832428,
-0.001385586685501039,
-0.050606198608875275,
-0.020341265946626663,
0.02385437861084938,
-0.008525953628122807,
-0.032214146107435226,
-0.00096021598437801,
-0.09526173770427704,
-0.03801136091351509,
0.03016291931271553,
0.09283581376075745,
-0.06265386939048767,
-0.... |
590b172f-7cdb-436b-9ad6-31ca9baa611e | | [Prefect](https://www.prefect.io/) | Software & Technology | Main Product | [Blog, May 2024](https://clickhouse.com/blog/prefect-event-driven-workflow-orchestration-powered-by-clickhouse) | — | — |
| [Propel](https://www.propeldata.com/) | Analytics | Main product | [Blog, January 2024](https://www.propeldata.com/blog/how-to-store-json-in-clickhouse-the-right-way) | — | — |
| [Property Finder](https://www.propertyfinder.com/) | Real Estate | — | ClickHouse Cloud user | — | — | | {"source_file": "adopters.md"} | [
-0.04304241016507149,
-0.040628038346767426,
-0.027370931580662727,
0.013837425038218498,
-0.027865847572684288,
-0.004565800074487925,
-0.014137718826532364,
-0.03877753019332886,
-0.027724433690309525,
-0.018371334299445152,
-0.02613237127661705,
0.03840494528412819,
0.03097444213926792,
... |
92d2cec0-235d-4b32-8d65-3bbe1db6bc34 | | [QINGCLOUD](https://www.qingcloud.com/) | Cloud services | Main product | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/4.%20Cloud%20%2B%20TSDB%20for%20ClickHouse%20张健%20QingCloud.pdf) | — | — |
| [Qrator](https://qrator.net) | DDoS protection | Main product | [Blog Post, March 2019](https://blog.qrator.net/en/clickhouse-ddos-mitigation_37/) | — | — |
| [Qualified](https://www.qualified.com/) | Sales Pipeline Management | Data and Messaging layers | [Job posting, Nov 2022](https://news.ycombinator.com/item?id=33425109) | — | — | | {"source_file": "adopters.md"} | [
-0.12164641916751862,
0.021313868463039398,
0.06176472455263138,
-0.054817602038383484,
0.002396671799942851,
-0.0522272102534771,
0.027709532529115677,
-0.029124990105628967,
-0.01745927520096302,
-0.023904433473944664,
0.023319559171795845,
0.037911076098680496,
0.042648784816265106,
0.0... |
9a5bd5bc-e87b-4cb1-9f14-19460136c956 | | [Qube Research & Technologies](https://www.qube-rt.com/) | FinTech | Analysis | ClickHouse Cloud user | — | — |
| [QuickCheck](https://quickcheck.ng/) | FinTech | Analytics | [Blog post, May 2022](https://clickhouse.com/blog/how-quickcheck-uses-clickhouse-to-bring-banking-to-the-unbanked/) | — | — |
| [R-Vision](https://rvision.pro/en/) | Information Security | — | [Article in Russian, December 2021](https://www.anti-malware.ru/reviews/R-Vision-SENSE-15) | — | — | | {"source_file": "adopters.md"} | [
-0.08125877380371094,
-0.036317378282547,
-0.08789732307195663,
-0.029358739033341408,
0.07700561732053757,
0.001085131079889834,
0.07012822479009628,
0.014303741045296192,
0.012129440903663635,
-0.005614499095827341,
0.012674089521169662,
0.11165887117385864,
0.017573826014995575,
0.02250... |
7767f8f7-b514-43a7-a338-b95fe0b241b3 | | [RELEX](https://relexsolutions.com) | Supply Chain Planning | Forecasting | [Meetup Video, December 2022](https://www.youtube.com/watch?v=wyOSMR8l-DI&list=PL0Z2YDlm0b3iNDUzpY1S3L_iV4nARda_U&index=16) [Slides, December 2022](https://presentations.clickhouse.com/meetup65/CRUDy%20OLAP.pdf) | — | — |
| [Raiffeisenbank](https://www.rbinternational.com/) | Banking | Analytics | [Lecture in Russian, December 2020](https://cs.hse.ru/announcements/421965599.html) | — | — |
| [Railway](https://railway.app/) | Software & Technology | PaaS Software Tools | [Changelog, May 2023](https://railway.app/changelog/2023-05-19-horizontal-scaling#logs-are-getting-faster) | — | — | | {"source_file": "adopters.md"} | [
-0.07314115017652512,
-0.0627717673778534,
-0.022177984938025475,
0.05764424800872803,
0.016451716423034668,
0.0043036784045398235,
-0.06664161384105682,
0.05582809075713158,
-0.023843536153435707,
-0.029870405793190002,
-0.03220969811081886,
0.05158090963959694,
-0.05874354764819145,
0.03... |
d20d8b1b-005a-44ba-b060-0bc54b6b770e | | [Rambler](https://rambler.ru) | Internet services | Analytics | [Talk in Russian, April 2018](https://medium.com/@ramblertop/разработка-api-clickhouse-для-рамблер-топ-100-f4c7e56f3141) | — | — |
| [Ramp](https://ramp.com/) | Financial Services | Real-Time Analytics, Fraud Detection | [NYC Meetup, March 2024](https://www.youtube.com/watch?v=7BtUgUb4gCs) | — | — |
| [Rapid Delivery Analytics](https://rda.team/) | Retail | Analytics | ClickHouse Cloud user | — | — | | {"source_file": "adopters.md"} | [
-0.07825276255607605,
-0.04434121772646904,
-0.06812357902526855,
-0.009030668996274471,
0.014408254064619541,
0.022220252081751823,
0.046365655958652496,
-0.05204181745648384,
0.0010320673463866115,
-0.03536359220743179,
-0.0335223414003849,
0.0241148229688406,
-0.01699248142540455,
0.051... |
d5cc8986-ed1a-45be-a4e8-3d8df0ab21c5 | | [Real Estate Analytics](https://rea-global.com/) | Software & Technology | Real-time Analytics | [Singapore meetup, February 2025](https://clickhouse.com/videos/singapore-meetup-real-estate-analytics-clickhouse-journey) , [Blog, April 2025](https://clickhouse.com/blog/how-real-estate-analytics-made-its-data-pipeline-50x-faster-with-clickhouse) | — | — |
| [Releem](https://releem.com/) | Databases | MySQL management | [Blog 2024](https://releem.com/blog/whats-new-at-releem-june-2023) | — | — |
| [Replica](https://replicahq.com) | Urban Planning | Analytics | [Job advertisement](https://boards.greenhouse.io/replica/jobs/5547732002?gh_jid=5547732002) | — | — | | {"source_file": "adopters.md"} | [
-0.03663128241896629,
-0.06681209057569504,
-0.037846025079488754,
0.06777612119913101,
0.010682100430130959,
-0.04959851875901222,
-0.04428475722670555,
-0.09455259889364243,
-0.05243625119328499,
0.012899966910481453,
-0.01151486486196518,
-0.007493257522583008,
0.06062983721494675,
0.05... |
ddbc9b6b-c3f1-4fd7-aed3-8cb1ebaf1a3d | | [Request Metrics](https://requestmetrics.com/) | Software & Technology | Observability | [Hacker News, May 2023](https://news.ycombinator.com/item?id=35982281) | — | — |
| [Rengage](https://rengage.ai/) | Marketing Analytics | Main product | [Bellevue Meetup, August 2024](https://github.com/user-attachments/files/17135804/Rengage.-.clickhouse.1.pptx) | — | — |
| [Resmo](https://www.resmo.com/) | Software & Technology | Cloud Security & Asset Management | — | 1 c7g.xlarge node | — | | {"source_file": "adopters.md"} | [
-0.08845377713441849,
-0.02216612920165062,
0.004176602698862553,
-0.00856112688779831,
0.020525576546788216,
-0.011776036582887173,
0.03572874888777733,
-0.0121670663356781,
0.001412649406120181,
0.007806690875440836,
0.015112495981156826,
0.0334307886660099,
0.051984380930662155,
-0.0182... |
69efa4e2-524b-4ef5-bccf-f2b8df190954 | | [Retell](https://retell.cc/) | Speech synthesis | Analytics | [Blog Article, August 2020](https://vc.ru/services/153732-kak-sozdat-audiostati-na-vashem-sayte-i-zachem-eto-nuzhno) | — | — |
| [Rivet](https://rivet.gg/) | Software & Technology | Game Server Scaling | [HackerNews, August 2023](https://news.ycombinator.com/item?id=37188659) | — | — |
| [Roblox](https://www.roblox.com/) | Gaming | Safety operations | [San Francisco Meetup, September 2024](https://github.com/user-attachments/files/17135964/2024-09-05-ClickHouse-Meetup-Roblox.1.pdf) | — | 100M events per day | | {"source_file": "adopters.md"} | [
-0.061364833265542984,
-0.05610770359635353,
0.006870961748063564,
-0.034108057618141174,
0.023554550483822823,
0.030960839241743088,
0.05063136667013168,
-0.050147850066423416,
-0.01211057510226965,
0.02802092395722866,
-0.051511816680431366,
0.012760624289512634,
0.05318982154130936,
-0.... |
7da3c999-239e-4d77-950f-93e178f3f741 | | [Rokt](https://www.rokt.com/) | Software & Technology | eCommerce | [Meetup Video, December 2022](https://www.youtube.com/watch?v=BEP07Edor-0&list=PL0Z2YDlm0b3iNDUzpY1S3L_iV4nARda_U&index=10) [Slides, December 2022](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup67/Building%20the%20future%20of%20reporting%20at%20Rokt.pdf) | — | — |
| [Rollbar](https://www.rollbar.com) | Software Development | Main Product | [Official Website](https://www.rollbar.com) | — | — |
| [Rspamd](https://rspamd.com/) | Antispam | Analytics | [Official Website](https://rspamd.com/doc/modules/clickhouse.html) | — | — | | {"source_file": "adopters.md"} | [
-0.14786702394485474,
-0.0269169919192791,
-0.09907934814691544,
-0.0226289052516222,
0.06676401942968369,
-0.019979819655418396,
-0.054582398384809494,
0.051903337240219116,
-0.058677416294813156,
-0.050910476595163345,
-0.005381098948419094,
0.048777010291814804,
0.017518719658255577,
-0... |
640f9ac7-96c3-4f20-bd1b-f09cb29b0761 | | [RuSIEM](https://rusiem.com/en) | SIEM | Main Product | [Official Website](https://rusiem.com/en/products/architecture) | — | — |
| [RunReveal](https://runreveal.com/) | SIEM | Main Product | [SF Meetup, Nov 2023](https://www.youtube.com/watch?v=rVZ9JnbzHTQ&list=PL0Z2YDlm0b3iNDUzpY1S3L_iV4nARda_U&index=25) | — | — |
| [S7 Airlines](https://www.s7.ru) | Airlines | Metrics, Logging | [Talk in Russian, March 2019](https://www.youtube.com/watch?v=nwG68klRpPg&t=15s) | — | — | | {"source_file": "adopters.md"} | [
0.0020663326140493155,
-0.055723268538713455,
-0.04090467467904091,
0.03943756967782974,
0.007461988367140293,
0.04932567849755287,
0.018646664917469025,
0.04669053852558136,
-0.043227605521678925,
-0.03340946137905121,
-0.06522397696971893,
0.05316166952252388,
-0.06773857772350311,
0.022... |
986bf1df-33a3-4bdb-b1d7-2f4416053a71 | | [SEMrush](https://www.semrush.com/) | Marketing | Main product | [Slides in Russian, August 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup17/5_semrush.pdf) | — | — |
| [SESCO Trading](https://www.sescotrading.com/) | Financial | Analysis | ClickHouse Cloud user | — | — |
| [SGK](http://www.sgk.gov.tr/wps/portal/sgk/tr) | Government Social Security | Analytics | [Slides in English, November 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup35/ClickHouse%20Meetup-Ramazan%20POLAT.pdf) | — | — | | {"source_file": "adopters.md"} | [
-0.05423063039779663,
-0.0128437839448452,
-0.07511892169713974,
0.07663603872060776,
0.006364874541759491,
0.04726996272802353,
0.030077936127781868,
0.02326550893485546,
-0.057556524872779846,
-0.057312577962875366,
0.027059543877840042,
0.047509655356407166,
-0.010930377058684826,
-0.05... |
bc51cfcd-8732-4567-899c-33fe1cee45b1 | | [SMI2](https://smi2.ru/) | News | Analytics | [Blog Post in Russian, November 2017](https://habr.com/ru/company/smi2/blog/314558/) | — | — |
| [Synclite](https://www.synclite.io/) | Software & Technology | Database Replication | [Official Website](https://www.synclite.io/) | — | — |
| [SQLPad](https://getsqlpad.com/en/introduction/) | Software & Technology | Web-based SQL editor. | [GitHub, March 2023](https://github.com/sqlpad/sqlpad/blob/master/server/package.json#L43) | — | — | | {"source_file": "adopters.md"} | [
-0.1057814210653305,
-0.1298028975725174,
-0.12646430730819702,
0.030967952683568,
0.010426944121718407,
-0.05094373598694801,
-0.01187098864465952,
-0.01715060882270336,
-0.07114715129137039,
0.009950868785381317,
0.045183468610048294,
0.04665941745042801,
0.057662490755319595,
-0.0256550... |
48951ca2-63c7-4624-9b00-f7aa0f2655f2 | | [Santiment](https://www.santiment.net) | Behavioral analytics for the crypto market | Main Product | [GitHub repo](https://github.com/santiment/sanbase2) | — | — |
| [Sber](https://www.sberbank.com/index) | Banking, Fintech, Retail, Cloud, Media | — | [Job advertisement, March 2021](https://career.habr.com/vacancies/1000073536) | 128 servers | >1 PB |
| [Scale8](https://scale8.com) | Tag Management and Analytics | Main product | [Source Code](https://github.com/scale8/scale8) | — | — | | {"source_file": "adopters.md"} | [
-0.0035047109704464674,
-0.06400150805711746,
-0.12519535422325134,
0.02142600156366825,
0.08548826724290848,
0.006321916822344065,
-0.013865035958588123,
0.04873048886656761,
-0.03794606402516365,
-0.06105800345540047,
-0.03565753251314163,
0.0007725479081273079,
0.05072873830795288,
-0.0... |
666067f0-13b4-4a4c-81d1-0934cc951ad7 | | [Scarf](https://about.scarf.sh/) | Open source analytics | Main product | [Meetup, December 2024](https://github.com/ClickHouse/clickhouse-presentations/blob/master/2024-meetup-san-francisco/ClickHouse%20Meet-up%20talk_%20Scarf%20%26%20Clickhouse.pdf) | — | — |
| [Scireum GmbH](https://www.scireum.de/) | e-Commerce | Main product | [Talk in German, February 2020](https://www.youtube.com/watch?v=7QWAn5RbyR4) | — | — |
| [ScrapingBee](https://www.scrapingbee.com/) | Software & Technology | Web scraping API | [Twitter, January 2024](https://twitter.com/PierreDeWulf/status/1745464855723986989) | — | — | | {"source_file": "adopters.md"} | [
-0.10598815232515335,
-0.04147800803184509,
-0.04737095162272453,
0.03973245248198509,
0.0683736577630043,
-0.027053363621234894,
-0.04055838659405708,
0.011171798221766949,
-0.06003503501415253,
-0.08758651465177536,
0.05386565998196602,
-0.031041519716382027,
-0.000990282860584557,
-0.00... |
1a4a9cbe-b30a-4874-9515-e1f46147814c | | [ScratchDB](https://scratchdb.com/) | Software & Technology | Serverless Analytics | [GitHub](https://github.com/scratchdata/ScratchDB) | — | — |
| [Segment](https://segment.com/) | Data processing | Main product | [Slides, 2019](https://slides.com/abraithwaite/segment-clickhouse) | 9 × i3en.3xlarge nodes (7.5 TB NVMe SSD, 96 GB memory, 12 vCPUs each) | — |
| [sembot.io](https://sembot.io/) | Shopping Ads | — | A comment on LinkedIn, 2020 | — | — | | {"source_file": "adopters.md"} | [
-0.05744234472513199,
-0.06544018536806107,
-0.0012620707275345922,
0.017383882775902748,
0.046508993953466415,
-0.06665651500225067,
0.014810150489211082,
0.021578356623649597,
-0.08146435767412186,
0.012645559385418892,
-0.021505244076251984,
0.008727842010557652,
0.060333844274282455,
-... |
a637dfc7-48d2-4cca-b5bc-e8bbd83b28b1 | | [Sendinblue](https://www.sendinblue.com/) | Software & Technology | Segmentation | [Blog, February 2023](https://engineering.sendinblue.com/segmentation-to-target-the-right-audience/) | 100 nodes | — |
| [Sentio](https://www.sentio.xyz/) | Software & Technology | Observability | [Twitter, April 2023](https://twitter.com/qiaokan/status/1650736518955438083) | — | — |
| [Sentry](https://sentry.io/) | Software Development | Main product | [Blog Post in English, May 2019](https://blog.sentry.io/2019/05/16/introducing-snuba-sentrys-new-search-infrastructure) | — | — | | {"source_file": "adopters.md"} | [
-0.04279308393597603,
-0.027895692735910416,
-0.028618764132261276,
-0.006751283071935177,
0.0695033147931099,
-0.07746655493974686,
-0.002762845018878579,
0.03544347733259201,
-0.04022722318768501,
0.0004926176043227315,
-0.023825205862522125,
-0.02617691271007061,
0.06307429820299149,
0.... |
8405bb2a-33ad-49c8-a8ce-f0c72a254507 | | [seo.do](https://seo.do/) | Analytics | Main product | [Slides in English, November 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup35/CH%20Presentation-%20Metehan%20Çetinkaya.pdf) | — | — |
| [Serif Health](https://www.serifhealth.com/) | Healthcare | Price transparency platform | [Chicago meetup, September 2019](https://clickhouse.com/videos/price-transparency-made-easy) | — | — |
| [Serverless](https://www.serverless.com/) | Serverless Apps | Metrics | ClickHouse Cloud user | — | — | | {"source_file": "adopters.md"} | [
-0.045496001839637756,
-0.006662761326879263,
-0.03149767220020294,
0.002257568296045065,
0.047008320689201355,
0.013352255336940289,
0.0029631410725414753,
0.008801768533885479,
-0.005256692413240671,
-0.08496836572885513,
0.03639186918735504,
0.05902643874287605,
-0.016260633245110512,
-... |
aad15a27-a457-4ce1-88ab-7752fe866d6d | | [ServiceNow](https://www.servicenow.com/) | Managed Services | Qualitative Mobile Analytics | [Meetup Video, January 2023](https://www.youtube.com/watch?v=b4Pmpx3iRK4&list=PL0Z2YDlm0b3iNDUzpY1S3L_iV4nARda_U&index=6) [Slides, January 2023](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup68/Appsee%20Remodeling%20-%20ClickHouse.pdf) | — | — |
| [Sewer AI](https://www.sewerai.com/) | Software & Technology | — | ClickHouse Cloud user | — | — |
| [Shopee](https://www.shopee.com/) | E-Commerce | Distributed Tracing | [Meetup Video, April 2024](https://youtu.be/_BVy-V2wy9s?feature=shared) [Slides, April 2024](https://raw.githubusercontent.com/ClickHouse/clickhouse-presentations/master/2024-meetup-singapore-1/Shopee%20-%20Distributed%20Tracing%20in%20ClickHouse.pdf) [Blog Post, June 2024](https://clickhouse.com/blog/seeing-the-big-picture-shopees-journey-to-distributed-tracing-with-clickhouse) | — | — | | {"source_file": "adopters.md"} | [
-0.1250268816947937,
-0.08527304977178574,
0.009436221793293953,
-0.024685679003596306,
0.019645482301712036,
-0.03764098137617111,
-0.006724617909640074,
-0.0637315958738327,
-0.023485805839300156,
0.0017752773128449917,
0.046851739287376404,
-0.00851356890052557,
0.0020879681687802076,
0... |
91fd4d1b-c2c4-4849-9862-f8f6caff545b | | [SigNoz](https://signoz.io/) | Observability Platform | Main Product | [Source code](https://github.com/SigNoz/signoz) , [Bangalore Meetup, February 2025](https://clickhouse.com/videos/lessons-from-building-a-scalable-observability-backend) | — | — |
| [Sina](http://english.sina.com/index.html) | News | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/6.%20ClickHouse最佳实践%20高鹏_新浪.pdf) | — | — |
| [Sinch](https://www.sinch.com/) | Software & Technology | Customer Communications Cloud | [HackerNews, May 2023](https://news.ycombinator.com/item?id=36042104) | — | — | | {"source_file": "adopters.md"} | [
-0.12161125242710114,
0.002779309870675206,
-0.031209567561745644,
-0.03740152716636658,
0.019451063126325607,
-0.016320640221238136,
-0.008888940326869488,
0.0009102236363105476,
-0.09577805548906326,
-0.05438550189137459,
0.043227069079875946,
-0.0002634749107528478,
0.0208733007311821,
... |
89b327f3-a70f-4c85-9b0f-a75572d6abff | | [Sipfront](https://www.sipfront.com/) | Software Development | Analytics | [Twitter, October 2021](https://twitter.com/andreasgranig/status/1446404332337913895?s=20) | — | — |
| [SiteBehaviour Analytics](https://www.sitebehaviour.com/) | Software | Analytics | [Twitter, 2024](https://twitter.com/developer_jass/status/1763023792970883322) | — | — |
| [Skool](https://www.skool.com/) | Community platform | Behavioral/Experimentation Analytics | [SoCal Meetup, August 2024](https://github.com/user-attachments/files/17081161/ClickHouse.Meetup.pptx) | — | 100m rows/day | | {"source_file": "adopters.md"} | [
-0.0549849309027195,
-0.04568789154291153,
-0.07875534892082214,
-0.010545983910560608,
0.057991303503513336,
-0.06587047874927521,
-0.002786640077829361,
-0.02602716162800789,
-0.022784225642681122,
0.03693505376577377,
-0.0012242121156305075,
-0.00962794292718172,
0.03159564733505249,
0.... |
b9de3570-5f21-47d1-8b79-daa41ef6f31f | | [slido](https://www.slido.com/) | Software & Technology | Q&A and Polling | [Meetup, April 2023](https://www.linkedin.com/events/datameetup-3-spotlightondataeng7048914766324473856/about/) | — | — |
| [Solarwinds](https://www.solarwinds.com/) | Software & Technology | Main product | [Talk in English, March 2018](https://www.youtube.com/watch?v=w8eTlqGEkkw) | — | — |
| [Sonrai Security](https://sonraisecurity.com/) | Cloud Security | — | Slack comments | — | — | | {"source_file": "adopters.md"} | [
-0.12293438613414764,
-0.019942710176110268,
-0.008545235730707645,
-0.07628552615642548,
0.048144515603780746,
-0.00276226201094687,
0.043656643480062485,
-0.030201978981494904,
-0.016084764152765274,
-0.001562893157824874,
-0.0132954316213727,
0.02657141350209713,
-0.021504174917936325,
... |
f97a81de-d9ea-452a-90b8-d61ccaae5af2 | | [Spark New Zealand](https://www.spark.co.nz/) | Telecommunications | Security Operations | [Blog Post, Feb 2020](https://blog.n0p.me/2020/02/2020-02-05-dnsmonster/) | — | — |
| [Spec](https://www.specprotected.com/) | Software & Technology | Online Fraud Detection | [HackerNews, August 2023](https://news.ycombinator.com/item?id=36965317) | — | — |
| [spectate](https://spectate.net/) | Software & Technology | Monitoring & Incident Management | [Twitter, August 2023](https://twitter.com/BjarnBronsveld/status/1700458569861112110) | — | — | | {"source_file": "adopters.md"} | [
-0.1065911129117012,
-0.031892869621515274,
-0.005937385372817516,
0.00158594676759094,
0.11481014639139175,
-0.003922062460333109,
0.035484444350004196,
-0.010280642658472061,
-0.07940063625574112,
0.03900519758462906,
0.018194178119301796,
0.025404062122106552,
0.008556890301406384,
0.07... |
74d57917-d19e-4402-b20e-a0f142241d7b | | [Splio](https://splio.com/en/) | Software & Technology | Individuation Marketing | [Slack, September 2023](https://clickhousedb.slack.com/archives/C04N3AU38DV/p1693995069023669) | — | — |
| [Splitbee](https://splitbee.io) | Analytics | Main Product | [Blog Post, May 2021](https://splitbee.io/blog/new-pricing) | — | — |
| [Splunk](https://www.splunk.com/) | Business Analytics | Main product | [Slides in English, January 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup12/splunk.pdf) | — | — | | {"source_file": "adopters.md"} | [
-0.0769011601805687,
-0.08626183867454529,
-0.07601325213909149,
0.05693044513463974,
0.060109421610832214,
-0.002437365474179387,
-0.01808713749051094,
0.031010450795292854,
-0.0625096932053566,
0.028272440657019615,
0.03542690724134445,
0.013734916225075722,
0.010704225860536098,
0.05024... |
dc9f845a-28b9-4847-98f9-47050efab8e9 | | [Spotify](https://www.spotify.com) | Music | Experimentation | [Slides, July 2018](https://www.slideshare.net/glebus/using-clickhouse-for-experimentation-104247173) | — | — |
| [Staffbase](https://staffbase.com/en/) | Software & Technology | Internal Communications | [ClickHouse Slack, April 2023](https://clickhousedb.slack.com/archives/C04N3AU38DV/p1682781081062859) | — | — |
| [Staffcop](https://www.staffcop.ru/) | Information Security | Main Product | [Official website, Documentation](https://www.staffcop.ru/sce43) | — | — | | {"source_file": "adopters.md"} | [
-0.037346500903367996,
-0.12105109542608261,
0.020216168835759163,
-0.02626432105898857,
0.06784646958112717,
0.026607714593410492,
0.020068811252713203,
0.01645869016647339,
-0.03462028503417969,
-0.028503376990556717,
-0.0019068571273237467,
0.0655418187379837,
0.037099942564964294,
-0.0... |
b9528fb6-7db4-4556-960f-d2448edefc6a | | [Statsig](https://statsig.com/) | Software & Technology | Real-time analytics | [Video](https://clickhouse.com/videos/statsig) | — | — |
| [Streamkap](https://streamkap.com/) | Data Platform | — | [Video](https://clickhouse.com/videos/switching-from-elasticsearch-to-clickhouse) | — | — |
| [Suning](https://www.suning.com/) | E-Commerce | User behaviour analytics | [Blog article](https://www.sohu.com/a/434152235_411876) | — | — | | {"source_file": "adopters.md"} | [
-0.024714943021535873,
-0.01627299189567566,
-0.075585275888443,
0.03336254507303238,
0.10861960798501968,
0.022443806752562523,
0.019760064780712128,
0.03846217319369316,
-0.055679723620414734,
-0.02094951830804348,
0.004117142874747515,
0.014825969003140926,
-0.010517491959035397,
0.0286... |
91a983c2-fa99-417e-82cd-37219770eb14 | | [Superology](https://superology.com/) | Software & Technology | Customer Analytics | [Blog Post, June 2022](https://clickhouse.com/blog/collecting-semi-structured-data-from-kafka-topics-using-clickhouse-kafka-engine) | — | — |
| [Superwall](https://superwall.me/) | Monetization Tooling | Main product | [Word of mouth, Jan 2022](https://github.com/ClickHouse/ClickHouse/pull/33573) | — | — |
| [SwarmFarm Robotics](https://www.swarmfarm.com/) | Agriculture & Technology | Main Product | [Meetup Slides](https://github.com/ClickHouse/clickhouse-presentations/blob/master/2024-meetup-melbourne-2/Talk%20Track%202%20-%20Harvesting%20Big%20Data%20at%20SwarmFarm%20Robotics%20-%20Angus%20Ross.pdf) | — | — | | {"source_file": "adopters.md"} | [
-0.12331049889326096,
-0.08416403830051422,
-0.07356486469507217,
0.03737470507621765,
0.0168911125510931,
-0.03783933445811272,
-0.08320316672325134,
0.013633379712700844,
-0.08320014178752899,
0.01874372363090515,
0.037844281643629074,
-0.00017486646538600326,
-0.01088410522788763,
0.000... |
49178035-af84-476c-8965-bfa663d51e34 | | [Swetrix](https://swetrix.com) | Analytics | Main Product | [Source code](https://github.com/swetrix/swetrix-api) | — | — |
| [Swift Navigation](https://www.swiftnav.com/) | Geo Positioning | Data Pipelines | [Job posting, Nov 2022](https://news.ycombinator.com/item?id=33426590) | — | — |
| [Synerise](https://synerise.com/) | ML&AI | Feature Store | [Presentation, April 2020](https://www.slideshare.net/AndrzejMichaowski/feature-store-solving-antipatterns-in-mlsystems-232829863) | — | — | | {"source_file": "adopters.md"} | [
-0.05968759208917618,
-0.08159366250038147,
-0.10010237991809845,
-0.002172545064240694,
0.03059307113289833,
-0.07478218525648117,
-0.03209349513053894,
-0.0075678639113903046,
-0.08173267543315887,
-0.02060054801404476,
0.06799470633268356,
0.05927984043955803,
-0.015781685709953308,
0.0... |
84dd8297-46a9-4bed-b578-03ca081f50a6 | | [Synpse](https://synpse.net/) | Application Management | Main Product | [Twitter, January 2022](https://twitter.com/KRusenas/status/1483571168363880455) | — | — |
| [Synq](https://www.synq.io) | Software & Technology | Main Product | [Blog Post, July 2023](https://clickhouse.com/blog/building-a-unified-data-platform-with-clickhouse) | — | — |
| [sumsub](https://sumsub.com/) | Software & Technology | Verification platform | [Meetup, July 2022](https://www.youtube.com/watch?v=F74bBGSMwGo) | — | — | | {"source_file": "adopters.md"} | [
-0.07571570575237274,
-0.08515851199626923,
0.013485115952789783,
-0.07585548609495163,
0.055341433733701706,
-0.0426616296172142,
-0.019660593941807747,
0.010717574506998062,
-0.02352897636592388,
-0.007682164665311575,
-0.03360862284898758,
0.051508210599422455,
0.03378729149699211,
-0.0... |
71c271f1-8103-4070-8f24-63e2376a4f10 | | [Talo Game Services](https://trytalo.com) | Gaming Analytics | Event-based player analytics | [Blog, August 2024](https://trytalo.com/blog/events-clickhouse-migration) | — | — |
| [Tasrie IT Services](https://tasrieit.com) | Software & Technology | Analytics | [Blog, January 2025](https://tasrieit.com/how-tasrie-it-services-uses-clickhouse) | — | — |
| [TURBOARD](https://www.turboard.com/) | BI Analytics | — | [Official website](https://www.turboard.com/blogs/clickhouse) | — | — | | {"source_file": "adopters.md"} | [
-0.04880495369434357,
-0.06062505394220352,
-0.06873394548892975,
-0.042471617460250854,
0.04114808514714241,
-0.0472816526889801,
0.04766242951154709,
-0.010023868642747402,
-0.08509325981140137,
0.04405505955219269,
0.015758628025650978,
0.024411480873823166,
-0.013652547262609005,
0.000... |
93e5c5fc-c772-44f0-af0a-f194e9fd163e | | [TeamApt](https://www.teamapt.com/) | FinTech | Data Processing | [Official Website](https://www.teamapt.com/) | — | — |
| [Teamtailor](https://www.teamtailor.com/en/) | Recruitment Software | — | ClickHouse Cloud user | — | — |
| [Tekion](https://tekion.com/) | Automotive Retail | Clickstream Analytics | [Blog Post, June 2024](https://clickhouse.com/blog/tekion-adopts-clickhouse-cloud-to-power-application-performance-and-metrics-monitoring) | — | — | | {"source_file": "adopters.md"} | [
-0.1002037301659584,
0.0026302363257855177,
-0.04782037436962128,
0.014926421456038952,
0.05565987154841423,
-0.06580184400081635,
0.019441286101937294,
0.0035379414912313223,
-0.003078082110732794,
-0.0003545567742548883,
0.028714017942547798,
-0.0078948475420475,
0.039612606167793274,
-0... |
ea9e47fa-ac0c-4cf3-85ea-9c30c6fea8bc | | [Temporal](https://temporal.io/) | Infrastructure software | Observability product | [Bellevue Meetup, August 2024](https://github.com/user-attachments/files/17135746/Temporal.Supercharged.Observability.with.ClickHouse.pdf) | — | — |
| [Tencent Music Entertainment (TME)](https://www.tencentmusic.com/) | BigData | Data processing | [Blog in Chinese, June 2020](https://cloud.tencent.com/developer/article/1637840) | — | — |
| [Tencent](https://www.tencent.com) | Big Data | Data processing | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/5.%20ClickHouse大数据集群应用_李俊飞腾讯网媒事业部.pdf) | — | — | | {"source_file": "adopters.md"} | [
-0.06490433216094971,
-0.00904643815010786,
0.041132602840662,
-0.061837803572416306,
0.03751824423670769,
0.04525787755846977,
-0.03064877912402153,
-0.04670325666666031,
-0.03475373983383179,
-0.025906266644597054,
0.08173208683729172,
0.024888772517442703,
0.035528022795915604,
0.012698... |
557def3c-daf7-45f8-9502-760ec70d336d | | [Tencent](https://www.tencent.com) | Messaging | Logging | [Talk in Chinese, November 2019](https://youtu.be/T-iVQRuw-QY?t=5050) | — | — |
| [Teralytics](https://www.teralytics.net/) | Mobility | Analytics | [Tech blog](https://www.teralytics.net/knowledge-hub/visualizing-mobility-data-the-scalability-challenge) | — | — |
| [Tesla](https://www.tesla.com/) | Electric vehicle and clean energy company | — | [Vacancy description, March 2021](https://news.ycombinator.com/item?id=26306170) | — | — | | {"source_file": "adopters.md"} | [
-0.04377824440598488,
0.018855012953281403,
0.03940018638968468,
-0.000027485120881465264,
0.05573013424873352,
-0.004364367574453354,
-0.047288864850997925,
0.04362865537405014,
-0.08242306113243103,
-0.08943155407905579,
0.01643369346857071,
0.023458704352378845,
0.06377767771482468,
0.0... |
6f8c2b00-2377-482a-a8e6-49d136c42e21 | | [The Guild](https://the-guild.dev/) | API Platform | Monitoring | [Blog Post, November 2022](https://clickhouse.com/blog/100x-faster-graphql-hive-migration-from-elasticsearch-to-clickhouse) [Blog](https://the-guild.dev/blog/graphql-hive-and-clickhouse) | — | — |
| [Theia](https://theia.so/) | Software & Technology | Threat Intelligence | [Twitter, July 2023](https://twitter.com/jreynoldsdev/status/1680639586999980033) | — | — |
| [ThirdWeb](https://thirdweb.com/) | Software & Technology | Blockchain analysis | ClickHouse Cloud user | — | — | | {"source_file": "adopters.md"} | [
-0.04637729376554489,
-0.037440717220306396,
-0.0450083389878273,
-0.018543297424912453,
0.06390051543712616,
-0.07290896028280258,
-0.05299137532711029,
-0.06543949991464615,
-0.05094001069664955,
0.08158797025680542,
-0.00767179112881422,
-0.03336635231971741,
0.06173752620816231,
-0.005... |
c7d3f5aa-bc42-4eae-bb5a-5ca54284d521 | | [Timeflow](https://timeflow.systems) | Software | Analytics | [Blog](https://timeflow.systems/why-we-moved-from-druid-to-clickhouse/) | — | — |
| [Timeplus](https://www.timeplus.com/) | Software & Technology | Streaming Analytics | [Meetup, August 2023](https://www.meetup.com/clickhouse-silicon-valley-meetup-group/events/294472987/) | — | — |
| [Tinybird](https://www.tinybird.co/) | Real-time Data Products | Data processing | [Official website](https://www.tinybird.co/) | — | — | | {"source_file": "adopters.md"} | [
-0.10410487651824951,
-0.03946015611290932,
-0.020470283925533295,
-0.01962229795753956,
0.07804960012435913,
-0.06632209569215775,
-0.014541720040142536,
-0.054631952196359634,
-0.008630247786641121,
-0.045961566269397736,
-0.0447213239967823,
0.012283788062632084,
-0.04864144325256348,
-... |
e9dda866-b8e6-4a6a-b101-141ee7d4e70e | | [TrackingPlan](https://www.trackingplan.com/) | Marketing & Sales | Monitoring | ClickHouse Cloud user | — | — |
| [Traffic Stars](https://trafficstars.com/) | AD network | — | [Slides in Russian, May 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup15/lightning/ninja.pdf) | 300 servers in Europe/US | 1.8 PiB, 700 000 insert rps (as of 2021) |
| [Trillabit](https://www.trillabit.com/home) | Software & Technology | Business Intelligence | [Blog, January 2023](https://clickhouse.com/blog/trillabit-utilizes-the-power-of-clickhouse-for-fast-scalable-results-within-their-self-service-search-driven-analytics-offering) | — | — | | {"source_file": "adopters.md"} | [
-0.08652839809656143,
-0.071106918156147,
-0.13576707243919373,
-0.00817599706351757,
-0.031522609293460846,
0.005933783948421478,
-0.0005243258201517165,
-0.019996412098407745,
-0.07881856709718704,
-0.023869290947914124,
0.029471931979060173,
0.010614689439535141,
0.03627419471740723,
0.... |
1490251f-de01-41d6-9c7c-63ea8a6419a6 | | [Trip.com](https://trip.com/) | Travel Services | Logging | [Meetup, March 2023](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup71/Trip.com.pdf) | — | — |
| [Turkcell](https://www.turkcell.com.tr/) | Telecom | BI Analytics | [YouTube Video](https://www.youtube.com/watch?v=ckvPBgXl82Q) | 2 nodes | 2TB per day, 100TB in total |
| [Tweeq](https://tweeq.sa/en) | Fintech | Spending Account | [Engineering Blog, May 2024](https://engineering.tweeq.sa/tweeq-data-platform-journey-and-lessons-learned-clickhouse-dbt-dagster-and-superset-fa27a4a61904) | — | — | | {"source_file": "adopters.md"} | [
-0.004295141901820898,
-0.04705718532204628,
0.009067758917808533,
0.06412956118583679,
0.007065357640385628,
-0.05435348302125931,
0.021116871386766434,
-0.0028258024249225855,
0.006303838919848204,
0.027737652882933617,
-0.0006664589745923877,
-0.025235844776034355,
-0.014940595254302025,
... |
43f24f9c-c2f8-41cc-b110-f0169d838940 | | [Twilio](https://www.twilio.com) | Customer engagement | Twilio SendGrid | [Meetup presentation, September 2024](https://github.com/user-attachments/files/17135790/twilio-sendgrid-clickhouse.1.pdf) | — | 10b events/day |
| [Tydo](https://www.tydo.com) | Customer intelligence | Customer Segmentation product | [SoCal meetup, August 2024](https://github.com/user-attachments/files/17081169/Tydo_ClickHouse.Presentation.8_21.pdf) | — | — |
| [URLsLab](https://www.urlslab.com/) | Software & Technology | WordPress Plugin | [Twitter, July 2023](https://twitter.com/Yasha_br/status/1680224776302784514) , [Twitter, September 2023](https://twitter.com/Yasha_br/status/1698724654339215812) | — | — | | {"source_file": "adopters.md"} | [
-0.08271579444408417,
-0.021043986082077026,
0.011718451045453548,
-0.011869572103023529,
0.05065078288316727,
-0.029373157769441605,
-0.032722242176532745,
0.04120669141411781,
0.06316155195236206,
0.024102726951241493,
0.009804749861359596,
0.06286855787038803,
-0.006902036257088184,
0.0... |
c594eabb-b2f0-4f8b-bb73-7390f3b59b72 | | [UTMSTAT](https://hello.utmstat.com/) | Analytics | Main product | [Blog post, June 2020](https://vc.ru/tribuna/133956-striming-dannyh-iz-servisa-skvoznoy-analitiki-v-clickhouse) | — | — |
| [Uber](https://www.uber.com) | Taxi | Logging | [Slides, February 2020](https://presentations.clickhouse.com/meetup40/uber.pdf) | — | — |
| [Uptrace](https://uptrace.dev/) | Software | Tracing Solution | [Official website, March 2021](https://uptrace.dev/open-source/) | — | — | | {"source_file": "adopters.md"} | [
-0.04962294548749924,
-0.021542035043239594,
-0.10933894664049149,
-0.0023227583151310682,
0.005438171327114105,
-0.05528116226196289,
0.022161217406392097,
0.025775032117962837,
-0.0702129378914833,
0.07398517429828644,
0.027675030753016472,
0.0031937737949192524,
0.05255310609936714,
0.0... |
b5727cf6-373d-4f5c-8f87-70c73fe953cc | | [UseTech](https://usetech.com/) | Software Development | — | [Job Posting, December 2021](https://vk.com/wall136266658_2418) | — | — |
| [Usermaven](https://usermaven.com/) | Product Analytics | Main Product | [HackerNews, January 2023](https://news.ycombinator.com/item?id=34404706) | — | — |
| [VKontakte](https://vk.com) | Social Network | Statistics, Logging | [Slides in Russian, August 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup17/3_vk.pdf) | — | — | | {"source_file": "adopters.md"} | [
-0.06690524518489838,
-0.023584775626659393,
-0.03520180657505989,
0.024491772055625916,
0.06557363271713257,
-0.011292713694274426,
0.01102606300264597,
0.013416728004813194,
-0.05747934803366661,
-0.05614811182022095,
-0.047474149614572525,
0.044981539249420166,
0.023284273222088814,
0.0... |
f0e919b7-2cba-4607-85ee-5c1d523352c9 | | [VKontech](https://vkontech.com/) | Distributed Systems | Migrating from MongoDB | [Blog, January 2022](https://vkontech.com/migrating-your-reporting-queries-from-a-general-purpose-db-mongodb-to-a-data-warehouse-clickhouse-performance-overview/) | — | — |
| [VMware](https://www.vmware.com/) | Cloud | VeloCloud, SDN | [Product documentation](https://docs.vmware.com/en/vRealize-Operations-Manager/8.3/com.vmware.vcom.metrics.doc/GUID-A9AD72E1-C948-4CA2-971B-919385AB3CA8.html) | — | — |
| [Valueleaf Services Pvt.Ltd](http://valueleaf.com/) | Software & Technology | Martech platform, Ads platform and Loan aggregator platform | [ClickHouse Slack, April 2023](https://clickhousedb.slack.com/archives/C04N3AU38DV/p1681122299263959) | — | — | | {"source_file": "adopters.md"} | [
-0.029500173404812813,
-0.008187585510313511,
-0.06190196052193642,
0.014143361710011959,
0.05941307172179222,
-0.04416346549987793,
-0.05504634231328964,
0.07125579565763474,
-0.01818956434726715,
0.030812693759799004,
-0.010084624402225018,
-0.04462246224284172,
0.0124813886359334,
-0.03... |
39763ad0-cfbc-4224-9607-5539e9a98708 | | [Vantage](https://www.vantage.sh/) | Software & Technology | Cloud Cost Management | [Meetup, April 2023](https://www.youtube.com/watch?v=gBgXcHM_ldc) , [ClickHouse Blog, June 2023](https://clickhouse.com/blog/nyc-meetup-report-vantages-journey-from-redshift-and-postgres-to-clickhouse) | — | — |
| [Velvet](https://www.usevelvet.com/) | Database management | Main product | [Job listing](https://news.ycombinator.com/item?id=38492272) | — | — |
| [Vercel](https://vercel.com/) | Traffic and Performance Analytics | — | Direct reference, October 2021 | — | — | | {"source_file": "adopters.md"} | [
-0.08502665907144547,
-0.06873548775911331,
-0.04778514802455902,
0.0582103468477726,
0.017938485369086266,
0.009525993838906288,
-0.03208154812455177,
-0.031091418117284775,
-0.08549365401268005,
-0.0882687047123909,
-0.045455820858478546,
0.04915577545762062,
-0.012833066284656525,
-0.04... |
8e1dfaf6-717d-46a2-b30c-8c0fd4a1a094 | | [Vexo](https://www.vexo.co/) | App development | Analytics | [Twitter, December 2023](https://twitter.com/FalcoAgustin/status/1737161334213546279) | — | — |
| [Vidazoo](https://www.vidazoo.com/) | Advertising | Analytics | ClickHouse Cloud user | — | — |
| [Vimeo](https://vimeo.com/) | Video hosting | Analytics | [Blog post](https://medium.com/vimeo-engineering-blog/clickhouse-is-in-the-house-413862c8ac28) | — | — | | {"source_file": "adopters.md"} | [
-0.02836357057094574,
-0.06198066473007202,
-0.010068502277135849,
-0.056761473417282104,
0.14983750879764557,
-0.023939132690429688,
0.0025908518582582474,
-0.010621384717524052,
0.000498208508361131,
-0.0285553690046072,
-0.003625355428084731,
0.03291701525449753,
0.034429989755153656,
0... |
ecb78fdb-8844-4953-83fa-d91e69c57749 | | [Visiology](https://visiology.com/) | Business intelligence | Analytics | [Company website](https://visiology.com/) | — | — |
| [Voltmetrix](https://voltmetrix.com/) | Database management | Main product | [Blog post](https://voltmetrix.com/blog/voltmetrix-iot-manufacturing-use-case/) | — | — |
| [Voltus](https://www.voltus.co/) | Energy | — | [Blog Post, Aug 2022](https://medium.com/voltus-engineering/migrating-kafka-to-amazon-msk-1f3a7d45b5f2) | — | — | | {"source_file": "adopters.md"} | [
-0.07631093263626099,
0.012152615003287792,
-0.08050532639026642,
-0.014761615544557571,
0.030686281621456146,
-0.04652973636984825,
-0.09425627440214157,
0.018022453412413597,
-0.006980315316468477,
-0.04240847006440163,
0.05589420348405838,
-0.027974093332886696,
-0.034519094973802567,
-... |
6530acae-527c-4335-ad2f-bd6493516035 | | [W3 Analytics](https://w3analytics.hottoshotto.com/) | Blockchain | Dashboards for NFT analytics | [Community Slack, July 2023](https://clickhousedb.slack.com/archives/CU170QE9H/p1689907164648339) | — | — |
| [WSPR Live](https://wspr.live/) | Software & Technology | WSPR Spot Data | [Twitter, April 2023](https://twitter.com/HB9VQQ/status/1652723207475015680) | — | — |
| [Waitlyst](https://waitlyst.co/) | Software & Technology | AI Customer Journey Management | [Twitter, June 2023](https://twitter.com/aaronkazah/status/1668261900554051585) | — | — | | {"source_file": "adopters.md"} | [
-0.09377603232860565,
-0.06797277182340622,
-0.06892196834087372,
0.04622543230652809,
0.11094754934310913,
-0.06939934194087982,
0.011949697509407997,
-0.040135204792022705,
-0.005946402903646231,
0.007662897929549217,
0.02928386628627777,
-0.008078275248408318,
-0.027455903589725494,
0.0... |
bc7f3cd9-7a32-4862-b074-44ea74ba6674 | | [Walmart Labs](https://www.walmartlabs.com/) | Internet, Retail | — | [Talk in English, July 2020](https://youtu.be/GMiXCMFDMow?t=144) | — | — |
| [WanShanData](http://wanshandata.com/home) | Software & Technology | Main Product | [Meetup Slides in Chinese](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup56/wanshandata.pdf) | — | — |
| [Wargaming](https://wargaming.com/en/) | Games | — | [Interview](https://habr.com/en/post/496954/) | — | — | | {"source_file": "adopters.md"} | [
-0.1140848696231842,
0.04001427814364433,
0.0005470424075610936,
-0.013750049285590649,
0.002920839935541153,
0.038005780428647995,
0.0009272340685129166,
-0.028197025880217552,
-0.0382867194712162,
-0.04225826635956764,
-0.02290225774049759,
0.06340985000133514,
-0.06088399142026901,
-0.0... |
785810b2-d709-4982-be19-4b3a43e482e4 | | [WebGazer](https://www.webgazer.io/) | Uptime Monitoring | Main Product | Community Slack, April 2022 | — | — |
| [WebScrapingHQ](https://www.webscrapinghq.com/) | Software & Technology | Web scraping API | [X, November 2024](https://x.com/harsh_maur/status/1862129151806968054) | — | — |
| [Weights & Biases](https://wandb.ai/site) | Software & Technology | LLM Monitoring | [Twitter, April 2024](https://github.com/user-attachments/files/17157064/Lukas.-.Clickhouse.pptx) | — | — | | {"source_file": "adopters.md"} | [
-0.13378533720970154,
-0.050301872193813324,
-0.03713138774037361,
0.0062590306624770164,
0.07557477056980133,
-0.07035636156797409,
-0.03649521991610527,
-0.015785230323672295,
-0.06963294744491577,
-0.010766332037746906,
-0.010606911964714527,
-0.02851717919111252,
-0.013509717769920826,
... |
4bdfaf18-afa1-4b20-a9af-abc9b34dd9aa | | [Wildberries](https://www.wildberries.ru/) | E-commerce | — | [Official website](https://it.wildberries.ru/) | — | — |
| [Wisebits](https://wisebits.com/) | IT Solutions | Analytics | [Slides in Russian, May 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup22/strategies.pdf) | — | — |
| [Workato](https://www.workato.com/) | Automation Software | — | [Talk in English, July 2020](https://youtu.be/GMiXCMFDMow?t=334) | — | — | | {"source_file": "adopters.md"} | [
-0.09471898525953293,
-0.039597801864147186,
-0.06229516863822937,
0.01603771187365055,
0.015736443921923637,
0.05344716086983681,
0.052009519189596176,
0.006869890261441469,
-0.09716877341270447,
-0.05456956475973129,
-0.03169963136315346,
0.06303147971630096,
-0.005823719315230846,
0.037... |
50f31b2f-8c52-43cc-a539-4b4ef523abb3 | | [Wowza](https://www.wowza.com/) | Video Platform | Streaming Analytics | ClickHouse Cloud user | — | — |
| [Wundergraph](https://wundergraph.com/) | Software & Technology | API Platform | [Twitter, February 2023](https://twitter.com/dustindeus/status/1628757807913750531) | — | — |
| [Xata](https://xata.io/) | Software & Technology | SaaS observability dashboard | [Twitter, March 2024](https://x.com/tudor_g/status/1770517054971318656) | — | — | | {"source_file": "adopters.md"} | [
-0.054305657744407654,
0.003219106001779437,
-0.028936440125107765,
-0.021625827997922897,
0.07526005804538727,
-0.025528913363814354,
0.011079713702201843,
-0.0639800876379013,
-0.06891686469316483,
0.016395006328821182,
0.020633557811379433,
-0.008226112462580204,
0.00020155530364718288,
... |
c0c9b7ad-ca29-418b-b04f-f2d48fbd05ec | | [Xenoss](https://xenoss.io/) | Martech, Adtech development | — | [Official website](https://xenoss.io/big-data-solution-development) | — | — |
| [Xiaoxin Tech](http://www.xiaoxintech.cn/) | Education | Common purpose | [Slides in English, November 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup33/sync-clickhouse-with-mysql-mongodb.pptx) | — | — |
| [Ximalaya](https://www.ximalaya.com/) | Audio sharing | OLAP | [Slides in English, November 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup33/ximalaya.pdf) | — | — | | {"source_file": "adopters.md"} | [
-0.07460032403469086,
-0.07409635931253433,
-0.04615117609500885,
-0.004043439868837595,
0.013855728320777416,
-0.042525649070739746,
-0.024275165051221848,
-0.009737799875438213,
-0.05172428861260414,
-0.029073577374219894,
0.0246221125125885,
-0.015021332539618015,
-0.010122970677912235,
... |
648f822b-84bf-4370-91c3-8a5e83abca88 | | [YTsaurus](https://ytsaurus.tech/) | Distributed Storage and Processing | Main product | [Main website](https://ytsaurus.tech/) | — | — |
| [Yandex Cloud](https://cloud.yandex.ru/services/managed-clickhouse) | Public Cloud | Main product | [Talk in Russian, December 2019](https://www.youtube.com/watch?v=pgnak9e_E0o) | — | — |
| [Yandex DataLens](https://cloud.yandex.ru/services/datalens) | Business Intelligence | Main product | [Slides in Russian, December 2019](https://presentations.clickhouse.com/meetup38/datalens.pdf) | — | — | | {"source_file": "adopters.md"} | [
-0.07822003960609436,
-0.06377112120389938,
-0.018378879874944687,
-0.007827786728739738,
0.03923575580120087,
-0.04592479020357132,
0.021598435938358307,
-0.011755989864468575,
-0.021408231928944588,
-0.010144298896193504,
0.019287720322608948,
0.08287932723760605,
0.014517986215651035,
0... |
1a438c96-0b8e-43d6-92f9-79847df9b74d | | [Yandex Market](https://market.yandex.ru/) | e-Commerce | Metrics, Logging | [Talk in Russian, January 2019](https://youtu.be/_l1qP0DyBcA?t=478) | — | — |
| [Yandex Metrica](https://metrica.yandex.com) | Web analytics | Main product | [Slides, February 2020](https://presentations.clickhouse.com/meetup40/introduction/#13) | 630 servers in one cluster, 360 servers in another cluster, 1862 servers in one department | 133 PiB / 8.31 PiB / 120 trillion records |
| [Yellowfin](https://www.yellowfinbi.com) | Analytics | Main product | [Integration](https://www.yellowfinbi.com/campaign/yellowfin-9-whats-new#el-30219e0e) | — | — | | {"source_file": "adopters.md"} | [
-0.00972325075417757,
-0.11215782165527344,
-0.03795478492975235,
0.051055796444416046,
-0.0148481335490942,
-0.014265524223446846,
0.02085360698401928,
-0.017665298655629158,
-0.007109861355274916,
-0.03584422543644905,
-0.008143682964146137,
0.03372906893491745,
0.08385321497917175,
0.03... |
c43d2e17-93a5-40cf-9407-b14ea8000dc2 | | [Yotascale](https://www.yotascale.com/) | Cloud | Data pipeline | [LinkedIn (Accomplishments)](https://www.linkedin.com/in/adilsaleem/) | — | 2 bn records/day |
| [Your Analytics](https://www.your-analytics.org/) | Product Analytics | Main Product | [Twitter, November 2021](https://twitter.com/mikenikles/status/1459737241165565953) | — | — |
| [Zagrava Trading](https://zagravagames.com/en/) | — | — | [Job offer, May 2021](https://twitter.com/datastackjobs/status/1394707267082063874) | — | — | | {"source_file": "adopters.md"} | [
-0.07059283554553986,
0.027919020503759384,
0.014373120851814747,
0.012983487918972969,
0.03263551741838455,
0.0036699448246508837,
0.0076631479896605015,
0.01593763567507267,
-0.028150182217359543,
0.030392155051231384,
-0.044421736150979996,
-0.009459839202463627,
0.023623758926987648,
0... |
747ae7b2-6e64-4b73-a774-28dac3ad75a4 | | [Zappi](https://www.zappi.io/web/) | Software & Technology | Market Research | [Twitter Post, June 2024](https://x.com/HermanLangner/status/1805870318218580004) | — | — |
| [Zerodha](https://zerodha.tech/) | Stock Broker | Logging | [Blog, March 2023](https://zerodha.tech/blog/logging-at-zerodha/) | — | — |
| [Zing Data](https://getzingdata.com/) | Software & Technology | Business Intelligence | [Blog, May 2023](https://clickhouse.com/blog/querying-clickhouse-on-your-phone-with-zing-data) | — | — | | {"source_file": "adopters.md"} | [
-0.0731792151927948,
0.038249336183071136,
-0.054305002093315125,
0.004991633351892233,
0.021783730015158653,
-0.06367774307727814,
-0.010773926042020321,
0.013075723312795162,
-0.043819937855005264,
-0.019010573625564575,
0.059652939438819885,
0.029108401387929916,
0.05016285181045532,
0.... |
1b0a7256-43b5-4426-98f0-dc1aebac49ca | | [Zipy](https://www.zipy.ai/) | Software & Technology | User session debug | [Blog, April 2023](https://www.zipy.ai/blog/deep-dive-into-clickhouse) | — | — |
| [Zomato](https://www.zomato.com/) | Online food ordering | Logging | [Blog, July 2023](https://www.zomato.com/blog/building-a-cost-effective-logging-platform-using-clickhouse-for-petabyte-scale) | — | — |
-0.031064383685588837,
0.03514479100704193,
-0.03382602334022522,
0.04698989912867546,
0.025386445224285126,
-0.0903935432434082,
-0.0117197185754776,
0.0017071443144232035,
-0.0718267634510994,
0.025352241471409798,
0.019382216036319733,
0.027068939059972763,
-0.024650439620018005,
0.0592... |
64a22288-4dbb-4aaa-9d06-5d15fa99ef1d | | [Zoox](https://zoox.com/) | Software & Technology | Observability | [Job listing](https://www.linkedin.com/jobs/view/senior-software-engineer-observability-at-zoox-4139400247) | — | — |
| [АС "Стрела"](https://magenta-technology.ru/sistema-upravleniya-marshrutami-inkassacii-as-strela/) | Transportation | — | [Job posting, Jan 2022](https://vk.com/topic-111905078_35689124?post=3553) | — | — |
| [ДомКлик](https://domclick.ru/) | Real Estate | — | [Article in Russian, October 2021](https://habr.com/ru/company/domclick/blog/585936/) | — | — | | {"source_file": "adopters.md"} | [
-0.0494760163128376,
0.01727931760251522,
-0.0514662079513073,
0.04247315600514412,
-0.00647779693827033,
-0.0010699369013309479,
-0.0034869161900132895,
0.019660264253616333,
-0.09363612532615662,
-0.06769587844610214,
-0.05330236256122589,
0.06022656336426735,
0.0026935043279081583,
0.08... |
80c48d0c-915e-4e4f-902e-dbdabd6c1886 | | [МКБ](https://mkb.ru/) | Bank | Web-system monitoring | [Slides in Russian, September 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup28/mkb.pdf) | — | — |
| [ООО «МПЗ Богородский»](https://shop.okraina.ru/) | Agriculture | — | [Article in Russian, November 2020](https://cloud.yandex.ru/cases/okraina) | — | — |
| [ЦВТ](https://htc-cs.ru/) | Software Development | Metrics, Logging | [Blog Post, March 2019, in Russian](https://vc.ru/dev/62715-kak-my-stroili-monitoring-na-prometheus-clickhouse-i-elk) | — | — | | {"source_file": "adopters.md"} | [
-0.04938717931509018,
0.020049402490258217,
-0.024348121136426926,
0.04106125235557556,
0.04268379509449005,
-0.05621805414557457,
0.017644263803958893,
0.0814153328537941,
-0.055508751422166824,
-0.030995244160294533,
-0.007032391615211964,
-0.0076177301816642284,
-0.01856178604066372,
0.... |
5c25091e-e3e6-400b-8def-5290fb42c22b | | [ЦФТ](https://cft.ru/) | Banking, Financial products, Payments | — | [Meetup in Russian, April 2020](https://team.cft.ru/events/162) | — | — |
| [Цифровой Рабочий](https://promo.croc.ru/digitalworker) | Industrial IoT, Analytics | — | [Blog post in Russian, March 2021](https://habr.com/en/company/croc/blog/548018/) | — | — | | {"source_file": "adopters.md"} | [
-0.04056337848305702,
-0.0181218683719635,
-0.036544013768434525,
0.03483790159225464,
0.028084252029657364,
-0.0056328377686440945,
0.06007593125104904,
0.057578008621931076,
-0.028543036431074142,
-0.06769730150699615,
-0.0168626606464386,
-0.03114173375070095,
-0.014894172549247742,
0.0... |
efa681ea-42b7-49ea-b7e8-71535ea001d9 | title: 'Getting started with chDB'
sidebar_label: 'Getting started'
slug: /chdb/getting-started
description: 'chDB is an in-process SQL OLAP Engine powered by ClickHouse'
keywords: ['chdb', 'embedded', 'clickhouse-lite', 'in-process', 'in process']
doc_type: 'guide'
# Getting started with chDB

In this guide, we're going to get up and running with the Python variant of chDB.
We'll start by querying a JSON file on S3, before creating a table in chDB based on the JSON file, and doing some queries on the data.
We'll also see how to have queries return data in different formats, including Apache Arrow and Pandas, and finally we'll learn how to query Pandas DataFrames.

## Setup {#setup}

Let's first create a virtual environment:

```bash
python -m venv .venv
source .venv/bin/activate
```

And now we'll install chDB.
Make sure you have version 2.0.2 or higher:

```bash
pip install "chdb>=2.0.2"
```

And now we're going to install `ipython`:

```bash
pip install ipython
```

We're going to use `ipython` to run the commands in the rest of the guide, which you can launch by running:

```bash
ipython
```

We'll also be using Pandas and Apache Arrow in this guide, so let's install those libraries too:

```bash
pip install pandas pyarrow
```
## Querying a JSON file in S3 {#querying-a-json-file-in-s3}

Let's now have a look at how to query a JSON file that's stored in an S3 bucket.
The YouTube dislikes dataset contains more than 4 billion rows of dislikes on YouTube videos up to 2021.
We're going to work with one of the JSON files from that dataset.

Import chdb:

```python
import chdb
```

We can write the following query to describe the structure of one of the JSON files:

```python
chdb.query(
    """
    DESCRIBE s3(
        's3://clickhouse-public-datasets/youtube/original/files/' ||
        'youtubedislikes_20211127161229_18654868.1637897329_vid.json.zst',
        'JSONLines'
    )
    SETTINGS describe_compact_output=1
    """
)
```
```text
"id","Nullable(String)"
"fetch_date","Nullable(String)"
"upload_date","Nullable(String)"
"title","Nullable(String)"
"uploader_id","Nullable(String)"
"uploader","Nullable(String)"
"uploader_sub_count","Nullable(Int64)"
"is_age_limit","Nullable(Bool)"
"view_count","Nullable(Int64)"
"like_count","Nullable(Int64)"
"dislike_count","Nullable(Int64)"
"is_crawlable","Nullable(Bool)"
"is_live_content","Nullable(Bool)"
"has_subtitles","Nullable(Bool)"
"is_ads_enabled","Nullable(Bool)"
"is_comments_enabled","Nullable(Bool)"
"description","Nullable(String)"
"rich_metadata","Array(Tuple(
    call Nullable(String),
    content Nullable(String),
    subtitle Nullable(String),
    title Nullable(String),
    url Nullable(String)))"
"super_titles","Array(Tuple(
    text Nullable(String),
    url Nullable(String)))"
"uploader_badges","Nullable(String)"
"video_badges","Nullable(String)"
```
We can also count the number of rows in that file: | {"source_file": "getting-started.md"} | [
-0.01103296224027872,
-0.05706993490457535,
-0.08974787592887878,
0.042169053107500076,
-0.011908391490578651,
-0.03646634891629219,
-0.0210895873606205,
0.01716376468539238,
-0.018212903290987015,
-0.06369083374738693,
0.010339878499507904,
-0.024510154500603676,
0.03194453939795494,
-0.0... |
3c60730d-8fa1-43a5-a8f7-1b3e3f9b1640 |
```python
chdb.query(
    """
    SELECT count()
    FROM s3(
        's3://clickhouse-public-datasets/youtube/original/files/' ||
        'youtubedislikes_20211127161229_18654868.1637897329_vid.json.zst',
        'JSONLines'
    )"""
)
```

```text
336432
```

This file contains just over 300,000 records.

chDB doesn't yet support passing in query parameters, but we can pull out the path and pass it in via an f-string:

```python
path = 's3://clickhouse-public-datasets/youtube/original/files/youtubedislikes_20211127161229_18654868.1637897329_vid.json.zst'
```

```python
chdb.query(
    f"""
    SELECT count()
    FROM s3('{path}','JSONLines')
    """
)
```

:::warning
This is fine to do with variables defined in your program, but don't do it with user-provided input, otherwise your query is open to SQL injection.
:::
## Configuring the output format {#configuring-the-output-format}

The default output format is `CSV`, but we can change that via the `output_format` parameter.
chDB supports the ClickHouse data formats, as well as some of its own, including `DataFrame`, which returns a Pandas DataFrame:
```python
result = chdb.query(
f"""
SELECT is_ads_enabled, count()
FROM s3('{path}','JSONLines')
GROUP BY ALL
""",
output_format="DataFrame"
)
print(type(result))
print(result)
```
```text
<class 'pandas.core.frame.DataFrame'>
   is_ads_enabled  count()
0           False   301125
1            True    35307
```
Or if we want to get back an Apache Arrow table:
```python
result = chdb.query(
f"""
SELECT is_live_content, count()
FROM s3('{path}','JSONLines')
GROUP BY ALL
""",
output_format="ArrowTable"
)
print(type(result))
print(result)
```
```text
pyarrow.Table
is_live_content: bool
count(): uint64 not null
is_live_content: [[false,true]]
count(): [[315746,20686]]
```
## Creating a table from JSON file {#creating-a-table-from-json-file}

Next, let's have a look at how to create a table in chDB.
We need to use a different API to do that, so let's first import it:

```python
from chdb import session as chs
```

Next, we'll initialize a session.
If we want the session to be persisted to disk, we need to provide a directory name.
If we leave it blank, the database will be in-memory and lost as soon as we kill the Python process.

```python
sess = chs.Session("gettingStarted.chdb")
```

Next, we'll create a database:

```python
sess.query("CREATE DATABASE IF NOT EXISTS youtube")
```

Now we can create a `dislikes` table based on the schema from the JSON file, using the `CREATE...EMPTY AS` technique.
We'll use the `schema_inference_make_columns_nullable` setting so that column types aren't all made `Nullable`.

```python
sess.query(f"""
    CREATE TABLE youtube.dislikes
    ORDER BY fetch_date
    EMPTY AS
    SELECT *
    FROM s3('{path}','JSONLines')
    SETTINGS schema_inference_make_columns_nullable=0
    """
)
```
We can then use the `DESCRIBE` clause to inspect the schema: | {"source_file": "getting-started.md"} | [
-0.007208862341940403,
-0.054647352546453476,
-0.09185970574617386,
0.04519312083721161,
-0.045268766582012177,
0.04604001343250275,
0.049451977014541626,
0.06768494844436646,
0.015926195308566093,
-0.03594554215669632,
-0.0014391541481018066,
-0.0066490513272583485,
0.07072339951992035,
-... |
f24488f5-493d-4ee5-bb82-119669ea6cf0 |
```python
sess.query(f"""
    DESCRIBE youtube.dislikes
    SETTINGS describe_compact_output=1
    """
)
```

```text
"id","String"
"fetch_date","String"
"upload_date","String"
"title","String"
"uploader_id","String"
"uploader","String"
"uploader_sub_count","Int64"
"is_age_limit","Bool"
"view_count","Int64"
"like_count","Int64"
"dislike_count","Int64"
"is_crawlable","Bool"
"is_live_content","Bool"
"has_subtitles","Bool"
"is_ads_enabled","Bool"
"is_comments_enabled","Bool"
"description","String"
"rich_metadata","Array(Tuple(
    call String,
    content String,
    subtitle String,
    title String,
    url String))"
"super_titles","Array(Tuple(
    text String,
    url String))"
"uploader_badges","String"
"video_badges","String"
```
Next, let's populate that table:

```python
sess.query(f"""
    INSERT INTO youtube.dislikes
    SELECT *
    FROM s3('{path}','JSONLines')
    SETTINGS schema_inference_make_columns_nullable=0
    """
)
```

We could also do both of these steps in one go, using the `CREATE...AS` technique.
Let's create a different table using that technique:

```python
sess.query(f"""
    CREATE TABLE youtube.dislikes2
    ORDER BY fetch_date
    AS
    SELECT *
    FROM s3('{path}','JSONLines')
    SETTINGS schema_inference_make_columns_nullable=0
    """
)
```
## Querying a table {#querying-a-table}

Finally, let's query the table:

```python
df = sess.query("""
    SELECT uploader, sum(view_count) AS viewCount, sum(like_count) AS likeCount, sum(dislike_count) AS dislikeCount
    FROM youtube.dislikes
    GROUP BY ALL
    ORDER BY viewCount DESC
    LIMIT 10
    """,
    "DataFrame"
)
df
```
```text
                             uploader  viewCount  likeCount  dislikeCount
0                             Jeremih  139066569     812602         37842
1                     TheKillersMusic  109313116     529361         11931
2  LetsGoMartin- Canciones Infantiles  104747788     236615        141467
3                    Xiaoying Cuisine   54458335    1031525         37049
4                                Adri   47404537     279033         36583
5                  Diana and Roma IND   43829341     182334        148740
6                      ChuChuTV Tamil   39244854     244614        213772
7                            Cheez-It   35342270        108            27
8                            Anime Uz   33375618    1270673         60013
9                    RC Cars OFF Road   31952962     101503         49489
```
Let's say we then add an extra column to the DataFrame to compute the ratio of likes to dislikes.
We could write the following code:

```python
df["likeDislikeRatio"] = df["likeCount"] / df["dislikeCount"]
```

## Querying a Pandas dataframe {#querying-a-pandas-dataframe}

We can then query that DataFrame from chDB:

```python
chdb.query(
    """
    SELECT uploader, likeDislikeRatio
    FROM Python(df)
    """,
    output_format="DataFrame"
)
``` | {"source_file": "getting-started.md"} | [
0.03155962750315666,
-0.037538524717092514,
-0.06226588413119316,
0.027601394802331924,
0.05298832058906555,
0.021350182592868805,
0.05100204050540924,
0.00006870309152873233,
-0.029005801305174828,
-0.015039641410112381,
0.01624375209212303,
-0.049256931990385056,
0.058508362621068954,
-0... |
2efb1c89-65fa-40ef-8095-d56fac5473ee |
```text
                             uploader  likeDislikeRatio
0                             Jeremih         21.473548
1                     TheKillersMusic         44.368536
2  LetsGoMartin- Canciones Infantiles          1.672581
3                    Xiaoying Cuisine         27.842182
4                                Adri          7.627395
5                  Diana and Roma IND          1.225857
6                      ChuChuTV Tamil          1.144275
7                            Cheez-It          4.000000
8                            Anime Uz         21.173296
9                    RC Cars OFF Road          2.051021
```

You can also read more about querying Pandas DataFrames in the Querying Pandas developer guide.

## Next steps {#next-steps}

Hopefully, this guide has given you a good overview of chDB.
To learn more about how to use it, see the following developer guides:

- Querying Pandas DataFrames
- Querying Apache Arrow
- Using chDB in JupySQL
- Using chDB with an existing clickhouse-local database | {"source_file": "getting-started.md"} | [
0.044081732630729675,
-0.010059223510324955,
-0.07819997519254684,
-0.002210057806223631,
0.022014809772372246,
0.0685824453830719,
0.05146029591560364,
0.026156200096011162,
-0.04965384677052498,
-0.025441696867346764,
0.08846460282802582,
-0.03656702861189842,
0.01236291415989399,
-0.066... |
d1c3945d-daa8-41d7-b4ed-cdf2959b4e61 | title: 'chDB'
sidebar_label: 'Overview'
slug: /chdb
description: 'chDB is an in-process SQL OLAP Engine powered by ClickHouse'
keywords: ['chdb', 'embedded', 'clickhouse-lite', 'in-process', 'in process']
doc_type: 'guide'
import Image from '@theme/IdealImage';
import dfBench from '@site/static/images/chdb/df_bench.png';
# chDB

chDB is a fast in-process SQL OLAP Engine powered by ClickHouse.
You can use it when you want to get the power of ClickHouse in a programming language without needing to connect to a ClickHouse server.
## Key features {#key-features}

- **In-process SQL OLAP Engine** - Powered by ClickHouse, no need to install a ClickHouse server
- **Multiple data formats** - Input & Output support for Parquet, CSV, JSON, Arrow, ORC and 70+ more formats
- **Minimized data copy** - From C++ to Python with Python memoryview
- **Rich Python ecosystem integration** - Native support for Pandas, Arrow, and DB API 2.0; seamlessly fits into existing data science workflows
- **Zero dependencies** - No need for external database installations
## What languages are supported by chDB? {#what-languages-are-supported-by-chdb}

chDB has the following language bindings:

- Python - API Reference
- Go
- Rust
- NodeJS
- Bun
- C and C++
## How do I get started? {#how-do-i-get-started}

If you're using Go, Rust, NodeJS, Bun or C and C++, take a look at the corresponding language pages.
If you're using Python, see the getting started developer guide or the chDB on-demand course; a minimal usage sketch also follows below. There are also guides showing how to do common tasks like:

- JupySQL
- Querying Pandas
- Querying Apache Arrow
- Querying data in S3
- Querying Parquet files
- Querying remote ClickHouse
- Using clickhouse-local database
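To give a feel for the API before diving into those guides, here is a minimal, hypothetical sketch of an embedded query in Python (it assumes chDB has been installed with `pip install chdb`):

```python
import chdb

# Run SQL in-process; no ClickHouse server needs to be running.
result = chdb.query("SELECT version(), 1 + 1 AS sum", "CSV")
print(result)
```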
## An introductory video {#an-introductory-video}

You can listen to a brief project introduction to chDB, courtesy of Alexey Milovidov, the original creator of ClickHouse.

## Performance benchmarks {#performance-benchmarks}

chDB delivers exceptional performance across different scenarios:

- ClickBench of embedded engines - comprehensive performance comparison
- DataFrame processing performance - comparative analysis with other DataFrame libraries (see the DataFrame Benchmark)

## About chDB {#about-chdb}

- Read the full story about the birth of the chDB project on the blog
- Read about chDB and its use cases on the blog
- Take the chDB on-demand course
- Discover chDB in your browser using codapi examples
- For more examples, see https://github.com/chdb-io/chdb/tree/main/examples

## License {#license}

chDB is available under the Apache License, Version 2.0. See LICENSE for more information. | {"source_file": "index.md"} | [
-0.018020113930106163,
-0.023476487025618553,
-0.10773926228284836,
0.06903098523616791,
-0.04293285310268402,
-0.03267791494727135,
0.05577000603079796,
0.044636789709329605,
-0.010177478194236755,
-0.043968960642814636,
-0.021401336416602135,
-0.041874464601278305,
0.07137562334537506,
-... |
73b9d852-c014-40ac-a386-eaef9a3ddd01 | description: 'Prerequisites and setup instructions for ClickHouse development'
sidebar_label: 'Prerequisites'
sidebar_position: 5
slug: /development/developer-instruction
title: 'Developer Prerequisites'
doc_type: 'guide'
# Prerequisites

ClickHouse can be built on Linux, FreeBSD and macOS.
If you use Windows, you can still build ClickHouse in a virtual machine running Linux, e.g. VirtualBox with Ubuntu.
## Create a Repository on GitHub {#create-a-repository-on-github}

To start developing for ClickHouse you will need a GitHub account.
Please also generate an SSH key locally (if you don't have one already) and upload the public key to GitHub, as this is a prerequisite for contributing patches.

Next, fork the ClickHouse repository in your personal account by clicking the "fork" button in the upper right corner.
To contribute changes, e.g. a fix for an issue or a feature, first commit your changes to a branch in your fork, then create a "Pull Request" with the changes to the main repository.
For working with Git repositories, please install Git. For example, on Ubuntu, run:

```sh
sudo apt update
sudo apt install git
```

A Git cheatsheet can be found here, and a detailed Git manual is here.
## Clone the repository to your development machine {#clone-the-repository-to-your-development-machine}

First, download the source files to your working machine, i.e. clone the repository:

```sh
git clone git@github.com:your_github_username/ClickHouse.git # replace the placeholder with your GitHub user name
cd ClickHouse
```

This command creates a directory `ClickHouse/` containing the source code, tests, and other files.
You can specify a custom directory for the checkout after the URL, but it is important that this path does not contain whitespace, as this may break the build later on.
ClickHouse's Git repository uses submodules to pull in third-party libraries.
Submodules are not checked out by default. You can either:

- run `git clone` with the option `--recurse-submodules`, or
- if `git clone` was run without `--recurse-submodules`, run `git submodule update --init --jobs <N>` to check out all submodules explicitly (`<N>` can be set for example to `12` to parallelize the download), or
- if `git clone` was run without `--recurse-submodules` and you would like to use a sparse and shallow submodule checkout to omit unneeded files and history in submodules to save space (ca. 5 GB instead of ca. 15 GB), run `./contrib/update-submodules.sh`. This alternative is used by CI but is not recommended for local development, as it makes working with submodules less convenient and slower.

To check the status of the Git submodules, run `git submodule status`, as shown in the sketch below.
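For example, the two common ways to get the submodules look like this (a sketch combining the commands above; the fork URL is the placeholder from earlier):

```sh
# Clone with all submodules in one step
git clone --recurse-submodules git@github.com:your_github_username/ClickHouse.git

# Or, if the repository was already cloned without them,
# fetch all submodules explicitly, downloading 12 at a time
git submodule update --init --jobs 12

# Verify that all submodules are checked out
git submodule status
```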
If you get the following error message
```bash
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
``` | {"source_file": "developer-instruction.md"} | [
-0.0017708163941279054,
-0.12197527289390564,
0.014264185912907124,
-0.05211549624800682,
-0.03634567931294441,
-0.039356522262096405,
-0.03393393009901047,
-0.013976098038256168,
-0.08442443609237671,
0.023507682606577873,
0.020240388810634613,
-0.0445597767829895,
0.0036806913558393717,
... |
469ebf2a-9858-4bff-a68e-dcd5ca3896ba | ```bash
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
```
the SSH keys for connecting to GitHub are missing.
These keys are normally located in `~/.ssh`.
For SSH keys to be accepted you need to upload them in GitHub's settings.

You can also clone the repository via HTTPS:

```sh
git clone https://github.com/ClickHouse/ClickHouse.git
```

This, however, will not let you send your changes to the server.
You can still use it temporarily and add the SSH keys later, replacing the remote address of the repository with the `git remote` command.

You can also add the original ClickHouse repo address to your local repository to pull updates from there:

```sh
git remote add upstream git@github.com:ClickHouse/ClickHouse.git
```

After successfully running this command you will be able to pull updates from the main ClickHouse repo by running `git pull upstream master`.

:::tip
Please do not use verbatim `git push`; you may push to the wrong remote and/or the wrong branch.
It is better to specify the remote and branch names explicitly, e.g. `git push origin my_branch_name`.
:::
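As a concrete example (the branch name is a placeholder), pushing a feature branch explicitly and setting its upstream at the same time looks like this:

```sh
# Push the current branch to your fork and remember the remote/branch pair
git push --set-upstream origin my_branch_name
```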
## Writing code {#writing-code}

Below you can find some quick links which may be useful when writing code for ClickHouse:

- ClickHouse Architecture
- Code style guide
- Third-party libraries
- Writing tests
- Open issues
## IDE {#ide}

Visual Studio Code and Neovim are two options that have worked well in the past for developing ClickHouse. If you are using VS Code, we recommend using the clangd extension to replace IntelliSense, as it is much more performant.

CLion is another great alternative. However, it can be slower on larger projects like ClickHouse. A few things to keep in mind when using CLion:

- CLion creates a `build` path on its own and automatically selects `debug` for the build type
- It uses a version of CMake that is defined in CLion and not the one installed by you
- CLion will use `make` to run build tasks instead of `ninja` (this is normal behavior)

Other IDEs you can use are Sublime Text, Qt Creator, or Kate.
## Create a pull request {#create-a-pull-request}

Navigate to your fork repository in GitHub's UI.
If you have been developing in a branch, you need to select that branch.
There will be a "Pull request" button located on the screen.
In essence, this means "create a request for accepting my changes into the main repository".

A pull request can be created even if the work is not completed yet.
In this case please put the word "WIP" (work in progress) at the beginning of the title; it can be changed later.
This is useful for cooperative reviewing and discussion of changes as well as for running all of the available tests.
It is important that you provide a brief description of your changes; it will later be used for generating release changelogs. | {"source_file": "developer-instruction.md"} | [
0.02645208314061165,
-0.08571609109640121,
-0.030989790335297585,
-0.030255503952503204,
-0.021900998428463936,
-0.016923122107982635,
-0.03854626044631004,
0.018780218437314034,
0.0007202409324236214,
0.06954796612262726,
0.03507024422287941,
-0.0009893635287880898,
0.043851226568222046,
... |
7673a780-5b2e-4455-ba6b-3fc0a997ac90 | Testing will commence as soon as ClickHouse employees label your PR with a tag "can be tested".
The results of some first checks (e.g. code style) will come in within several minutes.
Build check results will arrive within half an hour.
The main set of tests will report itself within an hour.

The system will prepare ClickHouse binary builds for your pull request individually.
To retrieve these builds, click the "Details" link next to the "Builds" entry in the list of checks.
There you will find direct links to the built .deb packages of ClickHouse which you can deploy even on your production servers (if you have no fear).

## Write documentation {#write-documentation}

Every pull request which adds a new feature must come with proper documentation.
If you'd like to preview your documentation changes, the instructions for how to build the documentation page locally are available in the README.md file here.

When adding a new function to ClickHouse you can use the template below as a guide:
```markdown
## newFunctionName

A short description of the function goes here. It should describe briefly what it does and a typical usage case.

**Syntax**

```sql
newFunctionName(arg1, arg2[, arg3])
```

**Arguments**

- `arg1` — Description of the argument. DataType
- `arg2` — Description of the argument. DataType
- `arg3` — Description of optional argument (optional). DataType

**Implementation Details**

A description of implementation details if relevant.

**Returned value**

Returns {insert what the function returns here}. DataType

**Example**

Query:

```sql
SELECT 'write your example query here';
```

Response:

```response
┌───────────────────────────────────┐
│ the result of the query           │
└───────────────────────────────────┘
```
```
## Using test data {#using-test-data}

Developing ClickHouse often requires loading realistic datasets.
This is particularly important for performance testing.
We have a specially prepared set of anonymized data of web analytics.
It additionally requires some 3GB of free disk space.
```sh
sudo apt install wget xz-utils
wget https://datasets.clickhouse.com/hits/tsv/hits_v1.tsv.xz
wget https://datasets.clickhouse.com/visits/tsv/visits_v1.tsv.xz
xz -v -d hits_v1.tsv.xz
xz -v -d visits_v1.tsv.xz
clickhouse-client
```
In clickhouse-client:
```sql
CREATE DATABASE IF NOT EXISTS test; | {"source_file": "developer-instruction.md"} | [
-0.026516154408454895,
-0.05559349060058594,
-0.029581358656287193,
0.019964510574936867,
-0.00293448637239635,
-0.058368097990751266,
-0.062231287360191345,
-0.0012545230565592647,
-0.0660807266831398,
0.018475143238902092,
0.028911778703331947,
-0.007565520238131285,
-0.013599732890725136,... |
5419573e-3253-4820-bb09-2457af55dc4b | CREATE TABLE test.hits ( WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String,
ParsedParams.Key1 Array(String), ParsedParams.Key2 Array(String), ParsedParams.Key3 Array(String), ParsedParams.Key4 Array(String), ParsedParams.Key5 Array(String), ParsedParams.ValueDouble
0.02527828887104988,
0.019780337810516357,
-0.0887729674577713,
0.018673306331038475,
-0.09434808790683746,
0.03730236738920212,
0.010577721521258354,
0.0072205448523163795,
-0.003948391415178776,
0.06516288965940475,
0.0032250413205474615,
-0.07607925683259964,
0.04183816909790039,
-0.049... |
7d8730f8-586c-4408-ab16-3248bb9059f8 |
Array(Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8) ENGINE = MergeTree PARTITION BY toYYYYMM(EventDate) SAMPLE BY intHash32(UserID) ORDER BY (CounterID, EventDate, intHash32(UserID), EventTime); | {"source_file": "developer-instruction.md"} | [
0.08288820087909698,
0.02565336413681507,
-0.014367245137691498,
-0.062073227018117905,
-0.1251974105834961,
-0.03519187122583389,
0.014697871170938015,
0.05842030420899391,
-0.017932560294866562,
-0.017802512273192406,
0.0007683751173317432,
-0.037322405725717545,
-0.0013613230548799038,
... |
bb2ac31f-d793-4830-ae1f-bda6755afc0f | CREATE TABLE test.visits ( CounterID UInt32, StartDate Date, Sign Int8, IsNew UInt8, VisitID UInt64, UserID UInt64, StartTime DateTime, Duration UInt32, UTCStartTime DateTime, PageViews Int32, Hits Int32, IsBounce UInt8, Referer String, StartURL String, RefererDomain String, StartURLDomain String, EndURL String, LinkURL String, IsDownload UInt8, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, PlaceID Int32, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), IsYandex UInt8, GoalReachesDepth Int32, GoalReachesURL Int32, GoalReachesAny Int32, SocialSourceNetworkID UInt8, SocialSourcePage String, MobilePhoneModel String, ClientEventTime DateTime, RegionID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RemoteIP UInt32, RemoteIP6 FixedString(16), IPNetworkID UInt32, SilverlightVersion3 UInt32, CodeVersion UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, UserAgentMajor UInt16, UserAgentMinor UInt16, WindowClientWidth UInt16, WindowClientHeight UInt16, SilverlightVersion2 UInt8, SilverlightVersion4 UInt16, FlashVersion3 UInt16, FlashVersion4 UInt16, ClientTimeZone Int16, OS UInt8, UserAgent UInt8, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, NetMajor UInt8, NetMinor UInt8, MobilePhone UInt8, SilverlightVersion1 UInt8, Age UInt8, Sex UInt8, Income UInt8, JavaEnable UInt8, CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, BrowserLanguage UInt16, BrowserCountry UInt16, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), Params Array(String),
Goals.ID Array(UInt32), Goals.Serial Array(UInt32), Goals.EventTime Array(DateTime), Goals.Price Array(Int64), Goals.OrderID Array(String), Goals.CurrencyID
Array(UInt32), WatchIDs Array(UInt64), ParamSumPrice Int64, ParamCurrency FixedString(3), ParamCurrencyID UInt16, ClickLogID UInt64, ClickEventID Int32, ClickGoodEvent Int32, ClickEventTime DateTime, ClickPriorityID Int32, ClickPhraseID Int32, ClickPageID Int32, ClickPlaceID Int32, ClickTypeID Int32, ClickResourceID Int32, ClickCost UInt32, ClickClientIP UInt32, ClickDomainID UInt32, ClickURL String, ClickAttempt UInt8, ClickOrderID UInt32, ClickBannerID UInt32, ClickMarketCategoryID UInt32, ClickMarketPP UInt32, ClickMarketCategoryName String, ClickMarketPPName String, ClickAWAPSCampaignName String, ClickPageName String, ClickTargetType UInt16, ClickTargetPhraseID UInt64, ClickContextType UInt8, ClickSelectType Int8, ClickOptions String, ClickGroupBannerID Int32, OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, FirstVisit DateTime, PredLastVisit Date, LastVisit Date, TotalVisits UInt32, | {"source_file": "developer-instruction.md"} | [
0.032418325543403625,
0.011424197815358639,
-0.09586574137210846,
0.025934237986803055,
-0.09045134484767914,
0.04900635778903961,
0.0315408781170845,
-0.019887860864400864,
0.004763551987707615,
0.04656326770782471,
0.025517519563436508,
-0.09620747715234756,
0.07519669830799103,
-0.04094... |
fe4214b3-e6df-4c65-895c-f17e6eadd637 | TraficSource.ID Array(Int8), TraficSource.SearchEngineID Array(UInt16), TraficSource.AdvEngineID Array(UInt8), TraficSource.PlaceID Array(UInt16), TraficSource.SocialSourceNetworkID Array(UInt8), TraficSource.Domain Array(String), TraficSource.SearchPhrase Array(String), TraficSource.SocialSourcePage
Array(String), Attendance FixedString(16), CLID UInt32, YCLID UInt64, NormalizedRefererHash UInt64, SearchPhraseHash UInt64, RefererDomainHash UInt64, NormalizedStartURLHash UInt64, StartURLDomainHash UInt64, NormalizedEndURLHash UInt64, TopLevelDomain UInt64, URLScheme UInt64, OpenstatServiceNameHash UInt64, OpenstatCampaignIDHash UInt64, OpenstatAdIDHash UInt64, OpenstatSourceIDHash UInt64, UTMSourceHash UInt64, UTMMediumHash UInt64, UTMCampaignHash UInt64, UTMContentHash UInt64, UTMTermHash UInt64, FromHash UInt64, WebVisorEnabled UInt8, WebVisorActivity UInt32,
ParsedParams.Key1 Array(String), ParsedParams.Key2 Array(String), ParsedParams.Key3 Array(String), ParsedParams.Key4 Array(String), ParsedParams.Key5 Array(String), ParsedParams.ValueDouble Array(Float64), Market.Type Array(UInt8), Market.GoalID Array(UInt32), Market.OrderID Array(String), Market.OrderPrice Array(Int64), Market.PP Array(UInt32), Market.DirectPlaceID Array(UInt32), Market.DirectOrderID Array(UInt32), Market.DirectBannerID Array(UInt32), Market.GoodID Array(String), Market.GoodName Array(String), Market.GoodQuantity Array(Int32), Market.GoodPrice
Array(Int64), IslandID FixedString(16)) ENGINE = CollapsingMergeTree(Sign) PARTITION BY toYYYYMM(StartDate) SAMPLE BY intHash32(UserID) ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID); | {"source_file": "developer-instruction.md"} | [
-0.014496633782982826,
-0.01662614196538925,
-0.1164105087518692,
-0.065900057554245,
-0.04053955897688866,
-0.037678007036447525,
0.06147255375981331,
-0.037710465490818024,
0.004669590387493372,
-0.01599595881998539,
0.03793356195092201,
0.024503376334905624,
0.08100780099630356,
-0.0824... |
cb253492-c043-4288-9e57-e55015cda52b | ```
Import the data:

```bash
clickhouse-client --max_insert_block_size 100000 --query "INSERT INTO test.hits FORMAT TSV" < hits_v1.tsv
clickhouse-client --max_insert_block_size 100000 --query "INSERT INTO test.visits FORMAT TSV" < visits_v1.tsv
``` | {"source_file": "developer-instruction.md"} | [
0.05298696830868721,
-0.010659190826117992,
-0.08319796621799469,
0.0325821116566658,
-0.062012072652578354,
-0.031120985746383667,
0.008111447095870972,
0.03038724698126316,
-0.07096698135137558,
0.040135934948921204,
-0.008894030936062336,
-0.00868652667850256,
0.04459051415324211,
-0.02... |
b448b70d-a936-4891-a360-dd267255417d | description: 'A comprehensive overview of ClickHouse architecture and its column-oriented design'
sidebar_label: 'Architecture Overview'
sidebar_position: 50
slug: /development/architecture
title: 'Architecture Overview'
doc_type: 'reference'
# Architecture Overview

ClickHouse is a true column-oriented DBMS. Data is stored by columns and is processed during query execution as arrays (vectors, or chunks of columns).
Whenever possible, operations are dispatched on arrays, rather than on individual values.
This is called "vectorized query execution" and it helps lower the cost of actual data processing.

This idea is not new.
It dates back to APL (A Programming Language, 1957) and its descendants: A+ (an APL dialect), J (1990), K (1993), and Q (a programming language from Kx Systems, 2003).
Array programming is used in scientific data processing. Nor is this idea something new in relational databases: for example, it is used in the VectorWise system (also known as Actian Vector Analytic Database by Actian Corporation).

There are two different approaches for speeding up query processing: vectorized query execution and runtime code generation. The latter removes all indirection and dynamic dispatch. Neither of these approaches is strictly better than the other. Runtime code generation can be better when it fuses many operations, thus fully utilizing CPU execution units and the pipeline. Vectorized query execution can be less practical because it involves temporary vectors that must be written to the cache and read back. If the temporary data does not fit in the L2 cache, this becomes an issue. But vectorized query execution more easily utilizes the SIMD capabilities of the CPU. A research paper written by our friends shows that it is better to combine both approaches. ClickHouse uses vectorized query execution and has limited initial support for runtime code generation.
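As a toy illustration of the difference (not ClickHouse code; the names are invented), compare per-value dispatch with one call over a whole column chunk:

```cpp
#include <cstdint>
#include <vector>

// Scalar form: one (possibly virtual) call per value.
int64_t addOneScalar(int64_t x) { return x + 1; }

// Vectorized form: one call per chunk; the tight loop over a contiguous
// array is easy for the compiler to turn into SIMD instructions.
std::vector<int64_t> addOneColumn(const std::vector<int64_t> & column)
{
    std::vector<int64_t> result(column.size());
    for (size_t i = 0; i < column.size(); ++i)
        result[i] = column[i] + 1;
    return result;
}
```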
## Columns {#columns}

The `IColumn` interface is used to represent columns in memory (actually, chunks of columns). This interface provides helper methods for the implementation of various relational operators. Almost all operations are immutable: they do not modify the original column, but create a new modified one. For example, the `IColumn::filter` method accepts a filter byte mask. It is used for the `WHERE` and `HAVING` relational operators. Additional examples: the `IColumn::permute` method to support `ORDER BY`, and the `IColumn::cut` method to support `LIMIT`. | {"source_file": "architecture.md"} | [
-0.04007166251540184,
0.024023879319429398,
-0.12111201137304306,
-0.001238862401805818,
-0.11060724407434464,
-0.09049022942781448,
0.035999562591314316,
-0.019171904772520065,
0.004017774481326342,
0.06272923201322556,
-0.037581559270620346,
0.09675522893667221,
0.06273331493139267,
-0.0... |
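Continuing the Columns discussion above, here is a toy, hypothetical sketch in the spirit of `IColumn::filter` (deliberately simplified; not the real interface):

```cpp
#include <cstdint>
#include <memory>
#include <vector>

struct ToyColumnUInt64
{
    std::vector<uint64_t> data;

    // Returns a brand-new column; the original stays untouched,
    // mirroring the "almost all operations are immutable" rule.
    std::shared_ptr<ToyColumnUInt64> filter(const std::vector<uint8_t> & mask) const
    {
        auto result = std::make_shared<ToyColumnUInt64>();
        for (size_t i = 0; i < data.size(); ++i)
            if (mask[i])
                result->data.push_back(data[i]);
        return result;
    }
};
```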
5ff2247a-43d0-4688-8b75-b13347af655e | Various `IColumn` implementations (`ColumnUInt8`, `ColumnString`, and so on) are responsible for the memory layout of columns. The memory layout is usually a contiguous array. For the integer type of columns, it is just one contiguous array, like `std::vector`. For `String` and `Array` columns, it is two vectors: one for all array elements, placed contiguously, and a second one for offsets to the beginning of each array. There is also `ColumnConst` that stores just one value in memory, but looks like a column.
## Field {#field}

Nevertheless, it is possible to work with individual values as well. To represent an individual value, the `Field` is used. `Field` is just a discriminated union of `UInt64`, `Int64`, `Float64`, `String` and `Array`. `IColumn` has the `operator[]` method to get the n-th value as a `Field`, and the `insert` method to append a `Field` to the end of a column. These methods are not very efficient, because they require dealing with temporary `Field` objects representing an individual value. There are more efficient methods, such as `insertFrom`, `insertRangeFrom`, and so on.

`Field` does not have enough information about a specific data type for a table. For example, `UInt8`, `UInt16`, `UInt32`, and `UInt64` are all represented as `UInt64` in a `Field`.
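A toy way to picture `Field` (hypothetical; the real class covers more cases and is not a `std::variant`) is a discriminated union over the base types listed above:

```cpp
#include <cstdint>
#include <string>
#include <variant>

// Arrays are omitted here for brevity; note how UInt8..UInt64 all
// collapse into the single uint64_t alternative, as described above.
using ToyField = std::variant<uint64_t, int64_t, double, std::string>;
```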
## Leaky abstractions {#leaky-abstractions}

`IColumn` has methods for common relational transformations of data, but they do not meet all needs. For example, `ColumnUInt64` does not have a method to calculate the sum of two columns, and `ColumnString` does not have a method to run a substring search. These countless routines are implemented outside of `IColumn`.

Various functions on columns can be implemented in a generic, non-efficient way using `IColumn` methods to extract `Field` values, or in a specialized way using knowledge of the inner memory layout of data in a specific `IColumn` implementation. This is done by casting functions to a specific `IColumn` type and dealing with the internal representation directly. For example, `ColumnUInt64` has the `getData` method that returns a reference to an internal array, then a separate routine reads or fills that array directly. We have "leaky abstractions" to allow efficient specializations of various routines.

## Data types {#data_types}

`IDataType` is responsible for serialization and deserialization: for reading and writing chunks of columns or individual values in binary or text form. `IDataType` directly corresponds to data types in tables. For example, there are `DataTypeUInt32`, `DataTypeDateTime`, `DataTypeString` and so on. | {"source_file": "architecture.md"} | [
0.05478396639227867,
0.04315495491027832,
-0.13952787220478058,
-0.006617719307541847,
-0.022993052378296852,
-0.003861827775835991,
0.027002470567822456,
0.0652974396944046,
0.11706912517547607,
-0.04623014107346535,
-0.02117055281996727,
0.05654482543468475,
-0.005665590055286884,
-0.062... |
f222873c-903c-4a4a-afb1-a83652c3b36b | `IDataType` and `IColumn` are only loosely related to each other. Different data types can be represented in memory by the same `IColumn` implementations. For example, `DataTypeUInt32` and `DataTypeDateTime` are both represented by `ColumnUInt32` or `ColumnConstUInt32`. In addition, the same data type can be represented by different `IColumn` implementations. For example, `DataTypeUInt8` can be represented by `ColumnUInt8` or `ColumnConstUInt8`.

`IDataType` only stores metadata. For instance, `DataTypeUInt8` does not store anything at all (except the virtual pointer `vptr`) and `DataTypeFixedString` stores just `N` (the size of fixed-size strings).

`IDataType` has helper methods for various data formats. Examples are methods to serialize a value with possible quoting, to serialize a value for JSON, and to serialize a value as part of the XML format. There is no direct correspondence to data formats. For example, the different data formats `Pretty` and `TabSeparated` can use the same `serializeTextEscaped` helper method from the `IDataType` interface.
## Block {#block}

A `Block` is a container that represents a subset (chunk) of a table in memory. It is just a set of triples: `(IColumn, IDataType, column name)`. During query execution, data is processed by `Block`s. If we have a `Block`, we have data (in the `IColumn` object), we have information about its type (in `IDataType`) that tells us how to deal with that column, and we have the column name. It could be either the original column name from the table or some artificial name assigned for getting temporary results of calculations.

When we calculate some function over columns in a block, we add another column with its result to the block, and we do not touch columns for arguments of the function because operations are immutable. Later, unneeded columns can be removed from the block, but not modified. This is convenient for the elimination of common subexpressions.

Blocks are created for every processed chunk of data. Note that for the same type of calculation, the column names and types remain the same for different blocks, and only the column data changes. It is better to split block data from the block header because small block sizes have a high overhead of temporary strings for copying shared_ptrs and column names.
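A toy rendering of that triple (hypothetical names, heavily simplified):

```cpp
#include <memory>
#include <string>
#include <vector>

struct ToyColumnWithTypeAndName
{
    std::shared_ptr<void> column;  // stand-in for an IColumn chunk
    std::string type;              // stand-in for IDataType, e.g. "UInt32"
    std::string name;              // original or artificial column name
};

// A block is just a set of such triples.
using ToyBlock = std::vector<ToyColumnWithTypeAndName>;
```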
## Processors {#processors}

See the description at https://github.com/ClickHouse/ClickHouse/blob/master/src/Processors/IProcessor.h.

## Formats {#formats}

Data formats are implemented with processors.

## I/O {#io}

For byte-oriented input/output, there are the `ReadBuffer` and `WriteBuffer` abstract classes. They are used instead of C++ `iostream`s. Don't worry: every mature C++ project is using something other than `iostream`s for good reasons. | {"source_file": "architecture.md"} | [
-0.006113240960985422,
-0.047675129026174545,
-0.08535030484199524,
-0.0000607240290264599,
-0.028734931722283363,
0.008011197671294212,
0.011908944696187973,
0.12139344960451126,
0.0036992663517594337,
-0.04631570354104042,
0.009524636901915073,
-0.02481982670724392,
-0.009466571733355522,
... |
50d0e7df-d59a-4d88-81c8-d2aaffa4334c | ReadBuffer
and
WriteBuffer
are just a contiguous buffer and a cursor pointing to the position in that buffer. Implementations may own or not own the memory for the buffer. There is a virtual method to fill the buffer with the following data (for
ReadBuffer
) or to flush the buffer somewhere (for
WriteBuffer
). The virtual methods are rarely called.
Implementations of
ReadBuffer
/
WriteBuffer
are used for working with files and file descriptors and network sockets, for implementing compression (
CompressedWriteBuffer
is initialized with another WriteBuffer and performs compression before writing data to it), and for other purposes – the names
ConcatReadBuffer
,
LimitReadBuffer
, and
HashingWriteBuffer
speak for themselves.
Read/WriteBuffers only deal with bytes. There are functions from
ReadHelpers
and
WriteHelpers
header files to help with formatting input/output. For example, there are helpers to write a number in decimal format.
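A toy sketch of the `ReadBuffer` idea (hypothetical simplification, not the real class): a contiguous buffer, a cursor, and a rarely-called virtual method to refill it:

```cpp
class ToyReadBuffer
{
public:
    virtual ~ToyReadBuffer() = default;

    // Fast path: step the cursor through the in-memory buffer;
    // the virtual refill is only hit when the buffer runs out.
    bool read(char & c)
    {
        if (pos == end && !next())
            return false;
        c = *pos++;
        return true;
    }

protected:
    virtual bool next() = 0;   // refill [pos, end) from a file, socket, ...
    char * pos = nullptr;
    char * end = nullptr;
};
```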
Let's examine what happens when you want to write a result set in `JSON` format to stdout.
You have a result set ready to be fetched from a pulling `QueryPipeline`.
First, you create a `WriteBufferFromFileDescriptor(STDOUT_FILENO)` to write bytes to stdout.
Next, you connect the result from the query pipeline to `JSONRowOutputFormat`, which is initialized with that `WriteBuffer`, to write rows in `JSON` format to stdout.
This can be done via the `complete` method, which turns a pulling `QueryPipeline` into a completed `QueryPipeline`.
Internally, `JSONRowOutputFormat` will write various JSON delimiters and call the `IDataType::serializeTextJSON` method with a reference to `IColumn` and the row number as arguments. Consequently, `IDataType::serializeTextJSON` will call a method from `WriteHelpers.h`: for example, `writeText` for numeric types and `writeJSONString` for `DataTypeString`.
## Tables {#tables}

The `IStorage` interface represents tables. Different implementations of that interface are different table engines. Examples are `StorageMergeTree`, `StorageMemory`, and so on. Instances of these classes are just tables.

The key methods in `IStorage` are `read` and `write`, along with others such as `alter`, `rename`, and `drop`. The `read` method accepts the following arguments: a set of columns to read from a table, the `AST` query to consider, and the desired number of streams. It returns a `Pipe`.

In most cases, the read method is responsible only for reading the specified columns from a table, not for any further data processing.
All subsequent data processing is handled by another part of the pipeline, which falls outside the responsibility of `IStorage`.

But there are notable exceptions:

- The AST query is passed to the `read` method, and the table engine can use it to derive index usage and to read less data from a table. | {"source_file": "architecture.md"} | [
0.0331181064248085,
-0.04654368758201599,
-0.03191705420613289,
-0.0392405167222023,
-0.13101257383823395,
-0.04333348944783211,
-0.006679816637188196,
0.04314245656132698,
0.022227773442864418,
0.02417006529867649,
-0.061602331697940826,
0.04383515566587448,
0.0035240240395069122,
-0.0717... |
da1746d5-62b9-4949-b244-1e90ac08b244 |
- Sometimes the table engine can process data itself to a specific stage. For example, `StorageDistributed` can send a query to remote servers, ask them to process data to a stage where data from different remote servers can be merged, and return that preprocessed data. The query interpreter then finishes processing the data.

The table's `read` method can return a `Pipe` consisting of multiple `Processors`. These `Processors` can read from a table in parallel.
Then, you can connect these processors with various other transformations (such as expression evaluation or filtering), which can be calculated independently.
And then, create a `QueryPipeline` on top of them, and execute it via `PipelineExecutor`.

There are also `TableFunction`s. These are functions that return a temporary `IStorage` object to use in the `FROM` clause of a query.

To get a quick idea of how to implement your table engine, look at something simple, like `StorageMemory` or `StorageTinyLog`.

As the result of the `read` method, `IStorage` returns `QueryProcessingStage` – information about what parts of the query were already calculated inside storage.

## Parsers {#parsers}

A hand-written recursive descent parser parses a query. For example, `ParserSelectQuery` just recursively calls the underlying parsers for various parts of the query. Parsers create an `AST`. The `AST` is represented by nodes, which are instances of `IAST`.

Parser generators are not used, for historical reasons.

## Interpreters {#interpreters}

Interpreters are responsible for creating the query execution pipeline from an AST. There are simple interpreters, such as `InterpreterExistsQuery` and `InterpreterDropQuery`, as well as the more sophisticated `InterpreterSelectQuery`.

The query execution pipeline is a combination of processors that can consume and produce chunks (sets of columns with specific types).
A processor communicates via ports and can have multiple input ports and multiple output ports.
A more detailed description can be found in `src/Processors/IProcessor.h`.

For example, the result of interpreting the `SELECT` query is a "pulling" `QueryPipeline` which has a special output port to read the result set from.
The result of the `INSERT` query is a "pushing" `QueryPipeline` with an input port to write data for insertion.
And the result of interpreting the `INSERT SELECT` query is a "completed" `QueryPipeline` that has no inputs or outputs but copies data from `SELECT` to `INSERT` simultaneously. | {"source_file": "architecture.md"} | [
-0.02434767410159111,
-0.0052282619290053844,
-0.08105754107236862,
0.10721056908369064,
-0.12320663779973984,
-0.07085481286048889,
-0.0009253286989405751,
0.09310442209243774,
0.005665278527885675,
0.013730326667428017,
-0.05582623556256294,
-0.02220972254872322,
0.011900515295565128,
-0... |
0700f9e7-f479-4bc9-b1e0-0752b89dee63 | `InterpreterSelectQuery` uses the `ExpressionAnalyzer` and `ExpressionActions` machinery for query analysis and transformations. This is where most rule-based query optimizations are performed. `ExpressionAnalyzer` is quite messy and should be rewritten: various query transformations and optimizations should be extracted into separate classes to allow for modular transformations of the query.

To address problems that exist in interpreters, a new `InterpreterSelectQueryAnalyzer` has been developed. This is a new version of `InterpreterSelectQuery` which does not use `ExpressionAnalyzer` and introduces an additional layer of abstraction between `AST` and `QueryPipeline`, called `QueryTree`. It is fully ready for use in production, but just in case it can be turned off by setting the value of the `enable_analyzer` setting to `false`.
## Functions {#functions}

There are ordinary functions and aggregate functions. For aggregate functions, see the next section.

Ordinary functions do not change the number of rows – they work as if they are processing each row independently. In fact, functions are not called for individual rows, but for `Block`s of data to implement vectorized query execution.

There are some miscellaneous functions, like `blockSize`, `rowNumberInBlock`, and `runningAccumulate`, that exploit block processing and violate the independence of rows.

ClickHouse has strong typing, so there's no implicit type conversion. If a function does not support a specific combination of types, it throws an exception. But functions can work (be overloaded) for many different combinations of types. For example, the `plus` function (to implement the `+` operator) works for any combination of numeric types: `UInt8` + `Float32`, `UInt16` + `Int8`, and so on. Also, some variadic functions can accept any number of arguments, such as the `concat` function.

Implementing a function may be slightly inconvenient because a function explicitly dispatches supported data types and supported `IColumn`s. For example, the `plus` function has code generated by instantiation of a C++ template for each combination of numeric types, and for constant or non-constant left and right arguments.

This is an excellent place to implement runtime code generation to avoid template code bloat. It also makes it possible to add fused functions like fused multiply-add, or to make multiple comparisons in one loop iteration.

Due to vectorized query execution, functions are not short-circuited. For example, if you write `WHERE f(x) AND g(y)`, both sides are calculated, even for rows where `f(x)` is zero (except when `f(x)` is a zero constant expression). But if the selectivity of the `f(x)` condition is high, and the calculation of `f(x)` is much cheaper than `g(y)`, it's better to implement multi-pass calculation: first calculate `f(x)`, then filter columns by the result, and then calculate `g(y)` only for smaller, filtered chunks of data. | {"source_file": "architecture.md"} | [
-0.09099175035953522,
0.0323396734893322,
-0.03195086121559143,
0.04634148254990578,
-0.09030671417713165,
-0.07043353468179703,
0.05955624580383301,
0.03212295100092888,
-0.03585195168852806,
-0.02336803637444973,
-0.09471061080694199,
-0.034351758658885956,
0.009594024159014225,
-0.06147... |
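To spell out the multi-pass idea from the Functions section above (toy code with invented predicates; not the ClickHouse implementation):

```cpp
#include <cstdint>
#include <vector>

int main()
{
    std::vector<int64_t> x = {0, 1, 0, 2}, y = {10, 20, 30, 40};

    // Pass 1: evaluate the cheap, selective predicate f(x) for all rows.
    std::vector<size_t> survivors;
    for (size_t i = 0; i < x.size(); ++i)
        if (x[i] != 0)                 // stand-in for f(x)
            survivors.push_back(i);

    // Pass 2: evaluate the expensive predicate g(y) only on surviving rows.
    std::vector<size_t> selected;
    for (size_t i : survivors)
        if (y[i] % 20 == 0)            // stand-in for a costly g(y)
            selected.push_back(i);

    return static_cast<int>(selected.size());
}
```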
5d72e919-b2a1-47b8-9399-bda487d4b422 | Aggregate functions {#aggregate-functions}
Aggregate functions are stateful functions. They accumulate passed values into some state and allow you to get results from that state. They are managed with the
IAggregateFunction
interface. States can be rather simple (the state for
AggregateFunctionCount
is just a single
UInt64
value) or quite complex (the state of
AggregateFunctionUniqCombined
is a combination of a linear array, a hash table, and a
HyperLogLog
probabilistic data structure).
States are allocated in
Arena
(a memory pool) to deal with multiple states while executing a high-cardinality
GROUP BY
query. States can have a non-trivial constructor and destructor: for example, complicated aggregation states can allocate additional memory themselves. It requires some attention to creating and destroying states and properly passing their ownership and destruction order.
Aggregation states can be serialized and deserialized to pass over the network during distributed query execution or to write them on the disk where there is not enough RAM. They can even be stored in a table with the
DataTypeAggregateFunction
to allow incremental aggregation of data.
The serialized data format for aggregate function states is not versioned right now. It is ok if aggregate states are only stored temporarily. But we have the `AggregatingMergeTree` table engine for incremental aggregation, and people are already using it in production. It is the reason why backward compatibility is required when changing the serialized format for any aggregate function in the future.
## Server {#server}
The server implements several different interfaces:

* An HTTP interface for any foreign clients.
* A TCP interface for the native ClickHouse client and for cross-server communication during distributed query execution.
* An interface for transferring data for replication.
Internally, it is just a primitive multithreaded server without coroutines or fibers. Since the server is not designed to process a high rate of simple queries but to process a relatively low rate of complex queries, each of them can process a vast amount of data for analytics.
The server initializes the `Context` class with the necessary environment for query execution: the list of available databases, users and access rights, settings, clusters, the process list, the query log, and so on. Interpreters use this environment.
We maintain full backward and forward compatibility for the server TCP protocol: old clients can talk to new servers, and new clients can talk to old servers. But we do not want to maintain it eternally, and we are removing support for old versions after about one year.
:::note
For most external applications, we recommend using the HTTP interface because it is simple and easy to use. The TCP protocol is more tightly linked to internal data structures: it uses an internal format for passing blocks of data, and it uses custom framing for compressed data. We haven't released a C library for that protocol because it requires linking most of the ClickHouse codebase, which is not practical.
:::
## Configuration {#configuration}
ClickHouse Server is based on POCO C++ Libraries and uses `Poco::Util::AbstractConfiguration` to represent its configuration. Configuration is held by the `Poco::Util::ServerApplication` class, inherited by the `DaemonBase` class, which in turn is inherited by the `DB::Server` class, implementing clickhouse-server itself. So config can be accessed by the `ServerApplication::config()` method.
Config is read from multiple files (in XML or YAML format) and merged into a single `AbstractConfiguration` by the `ConfigProcessor` class. Configuration is loaded at server startup and can be reloaded later if one of the config files is updated, removed or added. The `ConfigReloader` class is responsible for periodic monitoring of these changes and for the reload procedure as well. The `SYSTEM RELOAD CONFIG` query also triggers the config to be reloaded.
For queries and subsystems other than `Server`, the config is accessible using the `Context::getConfigRef()` method. Every subsystem that is capable of reloading its config without a server restart should register itself in a reload callback in the `Server::main()` method. Note that if a newer config has an error, most subsystems will ignore the new config, log warning messages and keep working with the previously loaded config. Due to the nature of `AbstractConfiguration` it is not possible to pass a reference to a specific section, so a `String config_prefix` is usually used instead.
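For illustration, a subsystem reading its settings under a string prefix might look like the following sketch; it uses the public Poco API, but the prefix and key names (`my_cache`, `enabled`, `max_size_bytes`) are invented:

```cpp
#include <cstdint>
#include <string>
#include <Poco/Util/AbstractConfiguration.h>

// A reference to a config *section* cannot be passed around, so subsystems
// receive the whole configuration plus a String config_prefix.
struct CacheSettings
{
    bool enabled = false;
    uint64_t max_size_bytes = 0;
};

CacheSettings loadCacheSettings(
    const Poco::Util::AbstractConfiguration & config,
    const std::string & config_prefix) // e.g. "my_cache" (invented)
{
    CacheSettings settings;
    settings.enabled = config.getBool(config_prefix + ".enabled", false);
    settings.max_size_bytes = config.getUInt64(config_prefix + ".max_size_bytes", 1 << 20);
    return settings;
}
```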
## Threads and jobs {#threads-and-jobs}
To execute queries and do side activities, ClickHouse allocates threads from one of several thread pools to avoid frequent thread creation and destruction. There are a few thread pools, which are selected depending on the purpose and structure of a job:
* Server pool for incoming client sessions.
* Global thread pool for general purpose jobs, background activities and standalone threads.
* IO thread pool for jobs that are mostly blocked on some IO and are not CPU-intensive.
* Background pools for periodic tasks.
* Pools for preemptable tasks that can be split into steps.
Server pool is a `Poco::ThreadPool` class instance defined in the `Server::main()` method. It can have at most `max_connection` threads. Every thread is dedicated to a single active connection.
Global thread pool is the `GlobalThreadPool` singleton class. To allocate a thread from it, `ThreadFromGlobalPool` is used. It has an interface similar to `std::thread`, but pulls its thread from the global pool and does all the necessary initialization. It is configured with the following settings (a usage sketch follows the list):

* `max_thread_pool_size` - limit on thread count in pool.
* `max_thread_pool_free_size` - limit on idle thread count waiting for new jobs.
* `thread_pool_queue_size` - limit on scheduled job count.
Global pool is universal, and all pools described below are implemented on top of it. This can be thought of as a hierarchy of pools. Any specialized pool takes its threads from the global pool using the `ThreadPool` class. So the main purpose of any specialized pool is to apply a limit on the number of simultaneous jobs and do job scheduling. If there are more jobs scheduled than threads in a pool, `ThreadPool` accumulates jobs in a queue with priorities. Each job has an integer priority. Default priority is zero. All jobs with higher priority values are started before any job with a lower priority value. But there is no difference between already executing jobs, thus priority matters only when the pool is overloaded.
IO thread pool is implemented as a plain `ThreadPool` accessible via the `IOThreadPool::get()` method. It is configured in the same way as the global pool, with the `max_io_thread_pool_size`, `max_io_thread_pool_free_size` and `io_thread_pool_queue_size` settings. The main purpose of the IO thread pool is to avoid exhaustion of the global pool with IO jobs, which could prevent queries from fully utilizing CPU. Backup to S3 does a significant amount of IO operations, and to avoid impact on interactive queries there is a separate `BackupsIOThreadPool` configured with the `max_backups_io_thread_pool_size`, `max_backups_io_thread_pool_free_size` and `backups_io_thread_pool_queue_size` settings.
For periodic task execution there is the `BackgroundSchedulePool` class. You can register tasks using `BackgroundSchedulePool::TaskHolder` objects, and the pool ensures that no task runs two jobs at the same time. It also allows you to postpone task execution to a specific instant in the future or temporarily deactivate a task. Global `Context` provides a few instances of this class for different purposes. For general purpose tasks `Context::getSchedulePool()` is used.
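A sketch of registering a self-rescheduling periodic task could look as follows; the method names (`createTask`, `activateAndSchedule`, `scheduleAfter`) are the author's assumptions and should be checked against the current source:

```cpp
// Sketch only: a self-rescheduling periodic task on BackgroundSchedulePool.
class HeartbeatSender
{
public:
    explicit HeartbeatSender(BackgroundSchedulePool & pool)
    {
        // The pool guarantees the task never runs two jobs at once.
        task = pool.createTask("HeartbeatSender", [this] { run(); }); // assumed API
        task->activateAndSchedule();
    }

private:
    void run()
    {
        // ... one unit of periodic work ...
        task->scheduleAfter(5000); // postpone next execution by 5 s (assumed API)
    }

    BackgroundSchedulePool::TaskHolder task;
};
```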
There are also specialized thread pools for preemptable tasks. Such an `IExecutableTask` task can be split into an ordered sequence of jobs, called steps. To schedule these tasks in a manner allowing short tasks to be prioritized over long ones, `MergeTreeBackgroundExecutor` is used. As the name suggests, it is used for background MergeTree-related operations such as merges, mutations, fetches and moves. Pool instances are available using `Context::getCommonExecutor()` and other similar methods.
No matter what pool is used for a job, at start a `ThreadStatus` instance is created for this job. It encapsulates all per-thread information: thread id, query id, performance counters, resource consumption and many other useful data. The job can access it via a thread-local pointer with the `CurrentThread::get()` call, so we do not need to pass it to every function.
If a thread is related to query execution, then the most important thing attached to `ThreadStatus` is the query context `ContextPtr`. Every query has its master thread in the server pool. The master thread does the attachment by holding a `ThreadStatus::QueryScope query_scope(query_context)` object. The master thread also creates a thread group represented with a `ThreadGroupStatus` object. Every additional thread that is allocated during this query execution is attached to its thread group by the `CurrentThread::attachTo(thread_group)` call. Thread groups are used to aggregate profile event counters and track memory consumption by all threads dedicated to a single task (see the `MemoryTracker` and `ProfileEvents::Counters` classes for more information).
## Concurrency control {#concurrency-control}
A query that can be parallelized uses the `max_threads` setting to limit itself. The default value for this setting is selected in a way that allows a single query to utilize all CPU cores in the best way. But what if there are multiple concurrent queries and each of them uses the default `max_threads` setting value? Then queries will share CPU resources. The OS will ensure fairness by constantly switching threads, which introduces some performance penalty.

`ConcurrencyControl` helps to deal with this penalty and avoid allocating a lot of threads. The configuration setting `concurrent_threads_soft_limit_num` is used to limit how many concurrent threads can be allocated before applying some kind of CPU pressure.

The notion of a CPU *slot* is introduced. A slot is a unit of concurrency: to run a thread, a query has to acquire a slot in advance and release it when the thread stops. The number of slots is globally limited in a server. Multiple concurrent queries are competing for CPU slots if the total demand exceeds the total number of slots. `ConcurrencyControl` is responsible for resolving this competition by doing CPU slot scheduling in a fair manner.
Each slot can be seen as an independent state machine with the following states:

* `free`: slot is available to be allocated by any query.
* `granted`: slot is *allocated* by a specific query, but not yet acquired by any thread.
* `acquired`: slot is *allocated* by a specific query and acquired by a thread.

Note that an *allocated* slot can be in two different states: `granted` and `acquired`. The former is a transitional state, which actually should be short (from the instant when a slot is allocated to a query till the moment when the up-scaling procedure is run by any thread of that query).
```mermaid
stateDiagram-v2
    direction LR
    [*] --> free
    free --> allocated: allocate
    state allocated {
        direction LR
        [*] --> granted
        granted --> acquired: acquire
        acquired --> [*]
    }
    allocated --> free: release
```
The API of `ConcurrencyControl` consists of the following functions (a consolidated sketch follows the list):

1. Create a resource allocation for a query: `auto slots = ConcurrencyControl::instance().allocate(1, max_threads);`. It will allocate at least 1 and at most `max_threads` slots. Note that the first slot is granted immediately, but the remaining slots may be granted later. Thus the limit is soft, because every query will obtain at least one thread.
2. For every thread, a slot has to be acquired from an allocation: `while (auto slot = slots->tryAcquire()) spawnThread([slot = std::move(slot)] { ... });`.
3. Update the total amount of slots: `ConcurrencyControl::setMaxConcurrency(concurrent_threads_soft_limit_num)`. Can be done at runtime, without a server restart.

This API allows queries to start with at least one thread (in the presence of CPU pressure) and later scale up to `max_threads`.
## Distributed query execution {#distributed-query-execution}
Servers in a cluster setup are mostly independent. You can create a `Distributed` table on one or all servers in a cluster. The `Distributed` table does not store data itself – it only provides a "view" to all local tables on multiple nodes of a cluster. When you SELECT from a `Distributed` table, it rewrites that query, chooses remote nodes according to load balancing settings, and sends the query to them. The `Distributed` table requests remote servers to process a query just up to a stage where intermediate results from different servers can be merged. Then it receives the intermediate results and merges them. The distributed table tries to distribute as much work as possible to remote servers and does not send much intermediate data over the network.

Things become more complicated when you have subqueries in IN or JOIN clauses, and each of them uses a `Distributed` table. We have different strategies for the execution of these queries.
There is no global query plan for distributed query execution. Each node has its local query plan for its part of the job. We only have simple one-pass distributed query execution: we send queries to remote nodes and then merge the results. But this is not feasible for complicated queries with high-cardinality `GROUP BY`s or with a large amount of temporary data for JOIN. In such cases, we need to "reshuffle" data between servers, which requires additional coordination. ClickHouse does not support that kind of query execution, and we need to work on it.
## Merge tree {#merge-tree}
`MergeTree` is a family of storage engines that supports indexing by primary key. The primary key can be an arbitrary tuple of columns or expressions. Data in a `MergeTree` table is stored in "parts". Each part stores data in the primary key order, so data is ordered lexicographically by the primary key tuple. All the table columns are stored in separate `column.bin` files in these parts. The files consist of compressed blocks. Each block is usually from 64 KB to 1 MB of uncompressed data, depending on the average value size. The blocks consist of column values placed contiguously one after the other. Column values are in the same order for each column (the primary key defines the order), so when you iterate by many columns, you get values for the corresponding rows.
The primary key itself is "sparse". It does not address every single row, but only some ranges of data. A separate `primary.idx` file has the value of the primary key for each N-th row, where N is called `index_granularity` (usually, N = 8192). Also, for each column, we have `column.mrk` files with "marks", which are offsets to each N-th row in the data file. Each mark is a pair: the offset in the file to the beginning of the compressed block, and the offset in the decompressed block to the beginning of data. Usually, compressed blocks are aligned by marks, and the offset in the decompressed block is zero. Data for `primary.idx` always resides in memory, and data for `column.mrk` files is cached.
When we are going to read something from a part in `MergeTree`, we look at `primary.idx` data and locate ranges that could contain the requested data, then look at `column.mrk` data and calculate offsets for where to start reading those ranges. Because of sparseness, excess data may be read. ClickHouse is not suitable for a high load of simple point queries, because the entire range with `index_granularity` rows must be read for each key, and the entire compressed block must be decompressed for each column. We made the index sparse because we must be able to maintain trillions of rows per single server without noticeable memory consumption for the index. Also, because the primary key is sparse, it is not unique: it cannot check the existence of the key in the table at INSERT time. You could have many rows with the same key in a table.
When you `INSERT` a bunch of data into `MergeTree`, that bunch is sorted by primary key order and forms a new part. There are background threads that periodically select some parts and merge them into a single sorted part to keep the number of parts relatively low. That's why it is called `MergeTree`. Of course, merging leads to "write amplification". All parts are immutable: they are only created and deleted, but not modified. When a SELECT is executed, it holds a snapshot of the table (a set of parts). After merging, we also keep old parts for some time to make recovery after failure easier, so if we see that some merged part is probably broken, we can replace it with its source parts.

`MergeTree` is not an LSM tree because it does not contain MEMTABLE and LOG: inserted data is written directly to the filesystem. This behavior makes MergeTree much more suitable for inserting data in batches. Therefore, frequently inserting small amounts of rows is not ideal for MergeTree. For example, a couple of rows per second is OK, but doing it a thousand times a second is not optimal for MergeTree. However, there is an async insert mode for small inserts to overcome this limitation. We did it this way for simplicity's sake, and because we are already inserting data in batches in our applications.
There are MergeTree engines that do additional work during background merges. Examples are `CollapsingMergeTree` and `AggregatingMergeTree`. This could be treated as special support for updates. Keep in mind that these are not real updates, because users usually have no control over the time when background merges are executed, and data in a `MergeTree` table is almost always stored in more than one part, not in a completely merged form.
## Replication {#replication}
Replication in ClickHouse can be configured on a per-table basis. You could have some replicated and some non-replicated tables on the same server. You could also have tables replicated in different ways, such as one table with two-factor replication and another with three-factor.
Replication is implemented in the `ReplicatedMergeTree` storage engine. The path in `ZooKeeper` is specified as a parameter for the storage engine. All tables with the same path in `ZooKeeper` become replicas of each other: they synchronize their data and maintain consistency. Replicas can be added and removed dynamically simply by creating or dropping a table.
Replication uses an asynchronous multi-master scheme. You can insert data into any replica that has a session with `ZooKeeper`, and data is replicated to all other replicas asynchronously. Because ClickHouse does not support UPDATEs, replication is conflict-free. As there is no quorum acknowledgment of inserts by default, just-inserted data might be lost if one node fails. The insert quorum can be enabled using the `insert_quorum` setting.
Metadata for replication is stored in ZooKeeper. There is a replication log that lists what actions to do. Actions are: get part; merge parts; drop a partition, and so on. Each replica copies the replication log to its queue and then executes the actions from the queue. For example, on insertion, the "get the part" action is created in the log, and every replica downloads that part. Merges are coordinated between replicas to get byte-identical results. All parts are merged in the same way on all replicas. One of the leaders initiates a new merge first and writes "merge parts" actions to the log. Multiple replicas (or all) can be leaders at the same time. A replica can be prevented from becoming a leader using the `merge_tree` setting `replicated_can_become_leader`. The leaders are responsible for scheduling background merges.
Replication is physical: only compressed parts are transferred between nodes, not queries. Merges are processed on each replica independently in most cases to lower the network costs by avoiding network amplification. Large merged parts are sent over the network only in cases of significant replication lag.
Besides, each replica stores its state in ZooKeeper as the set of parts and its checksums. When the state on the local filesystem diverges from the reference state in ZooKeeper, the replica restores its consistency by downloading missing and broken parts from other replicas. When there is some unexpected or broken data in the local filesystem, ClickHouse does not remove it, but moves it to a separate directory and forgets it.
:::note
The ClickHouse cluster consists of independent shards, and each shard consists of replicas. The cluster is *not elastic*, so after adding a new shard, data is not rebalanced between shards automatically. Instead, the cluster load is supposed to be adjusted to be uneven. This implementation gives you more control, and it is ok for relatively small clusters, such as tens of nodes. But for clusters with hundreds of nodes that we are using in production, this approach becomes a significant drawback. We should implement a table engine that spans across the cluster with dynamically replicated regions that could be split and balanced between clusters automatically.
:::
---
description: 'Guide for building ClickHouse from source on macOS systems'
sidebar_label: 'Build on macOS for macOS'
sidebar_position: 15
slug: /development/build-osx
title: 'Build on macOS for macOS'
keywords: ['MacOS', 'Mac', 'build']
doc_type: 'guide'
---
# How to Build ClickHouse on macOS for macOS
:::info You don't need to build ClickHouse yourself!
You can install pre-built ClickHouse as described in the Quick Start.
:::
ClickHouse can be compiled on macOS x86_64 (Intel) and arm64 (Apple Silicon) on macOS 10.15 (Catalina) or higher.

As a compiler, only Clang from Homebrew is supported.
## Install prerequisites {#install-prerequisites}
First, see the generic prerequisites documentation.

Next, install Homebrew, and then run:

```bash
brew update
brew install ccache cmake ninja libtool gettext llvm lld binutils grep findutils nasm bash rust rustup
```
:::note
Apple uses a case-insensitive file system by default. While this usually does not affect compilation (especially builds from scratch will work), it can confuse file operations like `git mv`.
For serious development on macOS, make sure that the source code is stored on a case-sensitive disk volume, e.g. see these instructions.
:::
## Build ClickHouse {#build-clickhouse}
To build, you must use Homebrew's Clang compiler:

```bash
cd ClickHouse
mkdir build
export PATH=$(brew --prefix llvm)/bin:$PATH
cmake -S . -B build
cmake --build build
```

The resulting binary will be created at `build/programs/clickhouse`.
:::note
If you are running into `ld: archive member '/' not a mach-o file in ...` errors during linking, you may need to use llvm-ar by setting the flag `-DCMAKE_AR=/opt/homebrew/opt/llvm/bin/llvm-ar`.
:::
## Caveats {#caveats}
If you intend to run `clickhouse-server`, make sure to increase the system's `maxfiles` variable.

:::note
You'll need to use sudo.
:::

To do so, create the `/Library/LaunchDaemons/limit.maxfiles.plist` file with the following content:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
        "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>limit.maxfiles</string>
    <key>ProgramArguments</key>
    <array>
      <string>launchctl</string>
      <string>limit</string>
      <string>maxfiles</string>
      <string>524288</string>
      <string>524288</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>ServiceIPC</key>
    <false/>
  </dict>
</plist>
```
Give the file correct permissions:

```bash
sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist
```

Validate that the file is correct:

```bash
plutil /Library/LaunchDaemons/limit.maxfiles.plist
```

Load the file (or reboot):

```bash
sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
```

To check if it's working, use the `ulimit -n` or `launchctl limit maxfiles` commands.
---
description: 'Guide for integrating Rust libraries into ClickHouse'
sidebar_label: 'Rust Libraries'
slug: /development/integrating_rust_libraries
title: 'Integrating Rust Libraries'
doc_type: 'guide'
---
# Rust Libraries

Rust library integration will be described based on the BLAKE3 hash-function integration.
The first step of integration is to add the library to the /rust folder. To do this, you need to create an empty Rust project and include the required library in Cargo.toml. It is also necessary to configure the new library compilation as static by adding `crate-type = ["staticlib"]` to Cargo.toml.

Next, you need to link the library to CMake using the Corrosion library. The first step is to add the library folder to the CMakeLists.txt inside the /rust folder. After that, you should add the CMakeLists.txt file to the library directory. In it, you need to call the Corrosion import function. These lines were used to import BLAKE3:
```CMake
corrosion_import_crate(MANIFEST_PATH Cargo.toml NO_STD)
target_include_directories(_ch_rust_blake3 INTERFACE include)
add_library(ch_rust::blake3 ALIAS _ch_rust_blake3)
```
Thus, we will create a correct CMake target using Corrosion, and then rename it with a more convenient name. Note that the name `_ch_rust_blake3` comes from Cargo.toml, where it is used as the project name (`name = "_ch_rust_blake3"`).
Since Rust data types are not compatible with C/C++ data types, we will use our empty library project to create shim methods for conversion of data received from C/C++, calling library methods, and inverse conversion for output data. For example, this method was written for BLAKE3:
```rust
#[no_mangle]
pub unsafe extern "C" fn blake3_apply_shim(
    begin: *const c_char,
    _size: u32,
    out_char_data: *mut u8,
) -> *mut c_char {
    // Guard against a null input pointer and report the error as a C string.
    if begin.is_null() {
        let err_str = CString::new("input was a null pointer").unwrap();
        return err_str.into_raw();
    }
    let mut hasher = blake3::Hasher::new();
    let input_bytes = CStr::from_ptr(begin);
    let input_res = input_bytes.to_bytes();
    hasher.update(input_res);
    let mut reader = hasher.finalize_xof();
    // The library writes the hash directly into the caller-provided buffer.
    reader.fill(std::slice::from_raw_parts_mut(out_char_data, blake3::OUT_LEN));
    std::ptr::null_mut()
}
```
This method gets a C-compatible string, its size and an output string pointer as input. Then, it converts the C-compatible inputs into types that are used by the actual library methods and calls them. After that, it should convert the library methods' outputs back into a C-compatible type. In this particular case, the library supported direct writing into the pointer by the method fill(), so the conversion was not needed. The main advice here is to create fewer methods, so you will need to do fewer conversions on each method call and won't create much overhead.
It is worth noting that the `#[no_mangle]` attribute and `extern "C"` are mandatory for all such methods. Without them, it will not be possible to perform a correct C/C++-compatible compilation. Moreover, they are necessary for the next step of the integration.
After writing the code for the shim methods, we need to prepare the header file for the library. This can be done manually, or you can use the cbindgen library for auto-generation. In case of using cbindgen, you will need to write a build.rs build script and include cbindgen as a build-dependency.
An example of a build script that can auto-generate a header file:
```rust
use std::env;

fn main() {
    // cbindgen scans the crate and emits include/<package_name>.h.
    let crate_dir = env::var("CARGO_MANIFEST_DIR").unwrap();
    let package_name = env::var("CARGO_PKG_NAME").unwrap();
    let output_file = ("include/".to_owned() + &format!("{}.h", package_name)).to_string();

    match cbindgen::generate(&crate_dir) {
        Ok(header) => {
            header.write_to_file(&output_file);
        }
        Err(err) => {
            panic!("{}", err)
        }
    }
}
```
Also, you should use the `#[no_mangle]` attribute and `extern "C"` for every C-compatible function. Without them, the library can compile incorrectly, and cbindgen won't launch the header autogeneration.

After all these steps, you can test your library in a small project to find all problems with compatibility or header generation. If any problems occur during header generation, you can try to configure it with a cbindgen.toml file (you can find a template here: https://github.com/eqrion/cbindgen/blob/master/template.toml).
It is worth noting a problem that occurred when integrating BLAKE3: MemorySanitizer can cause false-positive reports, as it is unable to see whether some variables in Rust are initialized or not. It was solved by writing a method with a more explicit definition for some variables, although this implementation of the method is slower and is used only to fix MemorySanitizer builds.
---
description: 'Guide to testing ClickHouse and running the test suite'
sidebar_label: 'Testing'
sidebar_position: 40
slug: /development/tests
title: 'Testing ClickHouse'
doc_type: 'guide'
---
# Testing ClickHouse
## Functional tests {#functional-tests}
Functional tests are the most simple and convenient to use. Most ClickHouse features can be tested with functional tests, and they are mandatory to use for every change in ClickHouse code that can be tested that way.

Each functional test sends one or multiple queries to the running ClickHouse server and compares the result with a reference.

Tests are located in the `./tests/queries` directory. Each test can be one of two types: `.sql` and `.sh`.

- An `.sql` test is a simple SQL script that is piped to `clickhouse-client`.
- An `.sh` test is a script that is run by itself.

SQL tests are generally preferable to `.sh` tests. You should use `.sh` tests only when you have to test some feature that cannot be exercised from pure SQL, such as piping some input data into `clickhouse-client` or testing `clickhouse-local`.
:::note
A common mistake when testing the data types `DateTime` and `DateTime64` is assuming that the server uses a specific time zone (e.g. "UTC"). This is not the case: time zones in CI test runs are deliberately randomized. The easiest workaround is to specify the time zone for test values explicitly, e.g. `toDateTime64(val, 3, 'Europe/Amsterdam')`.
:::
## Running a test locally {#running-a-test-locally}
Start the ClickHouse server locally, listening on the default port (9000). To run, for example, the test `01428_hash_set_nan_key`, change to the repository folder and run the following command:

```sh
PATH=<path to clickhouse-client>:$PATH tests/clickhouse-test 01428_hash_set_nan_key
```

Test results (`stderr` and `stdout`) are written to files `01428_hash_set_nan_key.[stderr|stdout]`, which are located next to the test itself (for `queries/0_stateless/foo.sql`, the output will be in `queries/0_stateless/foo.stdout`).

See `tests/clickhouse-test --help` for all options of `clickhouse-test`. You can run all tests, or run a subset of tests by providing a filter for test names: `./clickhouse-test substring`. There are also options to run tests in parallel or in random order.
## Adding a new test {#adding-a-new-test}
To add a new test, first create a `.sql` or `.sh` file in the `queries/0_stateless` directory. Then generate the corresponding `.reference` file using `clickhouse-client < 12345_test.sql > 12345_test.reference` or `./12345_test.sh > ./12345_test.reference`.

Tests should only create, drop, select from, etc. tables in database `test`, which is automatically created beforehand. It is okay to use temporary tables.

To set up the same environment as in CI locally, install the test configurations (they will use a Zookeeper mock implementation and adjust some settings):

```sh
cd <repository>/tests/config
sudo ./install.sh
```