The following query demonstrates this by using the `sum() OVER` clause, which creates a running total. The `bar()` function provides a visual representation of the growth.
```sql
SELECT
    toDate(time) AS day,
    sum(hits) AS h,
    sum(h) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS c,
    bar(c, 0, 50000, 25) AS b
FROM wikistat
WHERE path = '"Weird_Al"_Yankovic'
GROUP BY ALL
ORDER BY day
LIMIT 10;
```
```text
┌────────day─┬────h─┬─────c─┬─b──────────────────┐
│ 2015-05-01 │ 3934 │  3934 │ ██                 │
│ 2015-05-02 │ 3411 │  7345 │ ████               │
│ 2015-05-03 │ 3195 │ 10540 │ ██████             │
│ 2015-05-04 │ 3076 │ 13616 │ ███████            │
│ 2015-05-05 │ 3450 │ 17066 │ █████████          │
│ 2015-05-06 │ 3053 │ 20119 │ ██████████         │
│ 2015-05-07 │ 2890 │ 23009 │ ████████████       │
│ 2015-05-08 │ 3898 │ 26907 │ ██████████████     │
│ 2015-05-09 │ 3092 │ 29999 │ ███████████████    │
│ 2015-05-10 │ 3508 │ 33507 │ █████████████████  │
└────────────┴──────┴───────┴────────────────────┘
```
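Running totals pair naturally with row-to-row deltas. As a sketch (not from the original text), the `lagInFrame` window function computes the day-over-day change on the same data:

```sql
-- Day-over-day change in hits for the same page.
-- lagInFrame reads the previous row's value inside the window frame.
SELECT
    toDate(time) AS day,
    sum(hits) AS h,
    h - lagInFrame(h, 1) OVER (ORDER BY day ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING) AS delta
FROM wikistat
WHERE path = '"Weird_Al"_Yankovic'
GROUP BY ALL
ORDER BY day
LIMIT 10;
```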
## Rate calculations {#time-series-rate-calculations}
When analyzing time series data, it's often useful to understand the rate of events per unit of time.
This query calculates the rate of page views per second by dividing hourly totals by the number of seconds in an hour (3600).
The visual bar helps identify peak hours of activity.
```sql
SELECT
    toStartOfHour(time) AS time,
    sum(hits) AS hits,
    round(hits / (60 * 60), 2) AS rate,
    bar(rate * 10, 0, max(rate * 10) OVER (), 25) AS b
FROM wikistat
WHERE path = '"Weird_Al"_Yankovic'
GROUP BY time
LIMIT 10;
```
```text
┌────────────────time─┬─hits─┬─rate─┬─b─────┐
│ 2015-07-01 01:00:00 │  143 │ 0.04 │ ██    │
│ 2015-07-01 02:00:00 │  170 │ 0.05 │ ███   │
│ 2015-07-01 03:00:00 │  148 │ 0.04 │ ██    │
│ 2015-07-01 04:00:00 │  190 │ 0.05 │ ███   │
│ 2015-07-01 05:00:00 │  253 │ 0.07 │ ████  │
│ 2015-07-01 06:00:00 │  233 │ 0.06 │ ███   │
│ 2015-07-01 07:00:00 │  359 │ 0.1  │ █████ │
│ 2015-07-01 08:00:00 │  190 │ 0.05 │ ███   │
│ 2015-07-01 09:00:00 │  121 │ 0.03 │ ██    │
│ 2015-07-01 10:00:00 │   70 │ 0.02 │ █     │
└─────────────────────┴──────┴──────┴───────┘
```
## Histograms {#time-series-histograms}
A popular use case for time series data is to build histograms based on tracked events.
Suppose we wanted to understand the distribution of a number of pages based on their total hits, only including pages that have over 10,000 hits.
We can use the `histogram()` function to automatically generate an adaptive histogram based on the number of bins:
```sql
SELECT
    histogram(10)(hits) AS hist
FROM
(
    SELECT
        path,
        sum(hits) AS hits
    FROM wikistat
    WHERE date(time) = '2015-06-15'
    GROUP BY path
    HAVING hits > 10000
)
FORMAT Vertical;
```
```text
Row 1:
──────
hist: [(10033,23224.55065359477,60.625),(23224.55065359477,37855.38888888889,15.625),(37855.38888888889,52913.5,3.5),(52913.5,69438,1.25),(69438,83102.16666666666,1.25),(83102.16666666666,94267.66666666666,2.5),(94267.66666666666,116778,1.25),(116778,186175.75,1.125),(186175.75,946963.25,1.75),(946963.25,1655250,1.125)]
```
We can then use `arrayJoin()` to massage the data and `bar()` to visualize it:
```sql
WITH histogram(10)(hits) AS hist
SELECT
    round(arrayJoin(hist).1) AS lowerBound,
    round(arrayJoin(hist).2) AS upperBound,
    arrayJoin(hist).3 AS count,
    bar(count, 0, max(count) OVER (), 20) AS b
FROM
(
    SELECT
        path,
        sum(hits) AS hits
    FROM wikistat
    WHERE date(time) = '2015-06-15'
    GROUP BY path
    HAVING hits > 10000
);
```
```text
┌─lowerBound─┬─upperBound─┬──count─┬─b────────────────────┐
│      10033 │      19886 │ 53.375 │ ████████████████████ │
│      19886 │      31515 │ 18.625 │ ██████▉              │
│      31515 │      43518 │  6.375 │ ██▍                  │
│      43518 │      55647 │  1.625 │ ▌                    │
│      55647 │      73602 │  1.375 │ ▌                    │
│      73602 │      92880 │   3.25 │ █▏                   │
│      92880 │     116778 │  1.375 │ ▌                    │
│     116778 │     186176 │  1.125 │ ▍                    │
│     186176 │     946963 │   1.75 │ ▋                    │
│     946963 │    1655250 │  1.125 │ ▍                    │
└────────────┴────────────┴────────┴──────────────────────┘
```
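If adaptive bin edges are not wanted, fixed-width buckets can be built with plain arithmetic instead of `histogram()` — a sketch with an arbitrarily chosen 50,000-hit bucket width (not from the original text):

```sql
-- Fixed-width 50,000-hit buckets over the same filtered subquery.
SELECT
    intDiv(hits, 50000) * 50000 AS bucket,
    count() AS pages
FROM
(
    SELECT
        path,
        sum(hits) AS hits
    FROM wikistat
    WHERE date(time) = '2015-06-15'
    GROUP BY path
    HAVING hits > 10000
)
GROUP BY bucket
ORDER BY bucket;
```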
---
description: 'Index page for the time-series use-case guide.'
slug: /use-cases/time-series
title: 'Time-Series'
pagination_prev: null
pagination_next: null
keywords: ['time-series', 'time-based data', 'metrics', 'sensor data', 'temporal analysis', 'trend analysis']
doc_type: 'guide'
---
Welcome to our time-series use case guide. In this guide you'll learn how you can set up and use ClickHouse for time-series data.
Time-series data is everywhere in modern analytics. From system metrics and application logs to business events and sensor readings, data points collected over time help us understand trends, patterns, and anomalies in our systems and processes.
ClickHouse excels at handling time-series data, offering powerful features for both storage and analysis. Whether you're building a simple monitoring dashboard or processing petabytes of sensor data in real-time, ClickHouse provides the tools and performance you need.
This guide walks you through everything you need to know about working with time-series data in ClickHouse, from basic concepts to advanced optimization techniques. You'll learn how to:

- Choose the right date and time data types for your use case
- Perform common time-series operations and aggregations
- Apply specialized analysis functions for time-based data
- Optimize storage efficiency for temporal data
- Tune query performance for time-series workloads

Whether you're new to time-series analysis or looking to optimize an existing implementation, this guide will help you make the most of ClickHouse's time-series capabilities.
---
title: 'Basic operations - Time-series'
sidebar_label: 'Basic operations'
description: 'Basic time-series operations in ClickHouse.'
slug: /use-cases/time-series/basic-operations
keywords: ['time-series', 'basic operations', 'data ingestion', 'querying', 'filtering', 'grouping', 'aggregation']
show_related_blogs: true
doc_type: 'guide'
---
# Basic time-series operations
ClickHouse provides several methods for working with time series data, allowing you to aggregate, group, and analyze data points across different time periods.
This section covers the fundamental operations commonly used when working with time-based data.
Common operations include grouping data by time intervals, handling gaps in time series data, and calculating changes between time periods.
These operations can be performed using standard SQL syntax combined with ClickHouse's built-in time functions.
We're going to explore ClickHouse time-series querying capabilities with the Wikistat (Wikipedia pageviews data) dataset:
```sql
CREATE TABLE wikistat
(
    `time` DateTime,
    `project` String,
    `subproject` String,
    `path` String,
    `hits` UInt64
)
ENGINE = MergeTree
ORDER BY (time);
```
Let's populate this table with 1 billion records:
```sql
INSERT INTO wikistat
SELECT *
FROM s3('https://ClickHouse-public-datasets.s3.amazonaws.com/wikistat/partitioned/wikistat*.native.zst')
LIMIT 1e9;
```
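Loading a billion rows can take a while. As a quick sanity check (a sketch, assuming the insert above has finished), we can confirm the row count and the time range we just loaded:

```sql
-- Sanity check after the bulk insert: row count and covered time range.
SELECT
    count() AS rows,
    min(time) AS first_ts,
    max(time) AS last_ts
FROM wikistat;
```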
## Aggregating by time bucket {#time-series-aggregating-time-bucket}
The most popular requirement is to aggregate data based on periods, e.g., get the total number of hits for each day:
```sql
SELECT
    toDate(time) AS date,
    sum(hits) AS hits
FROM wikistat
GROUP BY ALL
ORDER BY date ASC
LIMIT 5;
```
```text
┌───────date─┬─────hits─┐
│ 2015-05-01 │ 25524369 │
│ 2015-05-02 │ 25608105 │
│ 2015-05-03 │ 28567101 │
│ 2015-05-04 │ 29229944 │
│ 2015-05-05 │ 29383573 │
└────────────┴──────────┘
```
We've used the `toDate()` function here, which converts the specified time to a date type. Alternatively, we can batch by an hour and filter on the specific date:
```sql
SELECT
    toStartOfHour(time) AS hour,
    sum(hits) AS hits
FROM wikistat
WHERE date(time) = '2015-07-01'
GROUP BY ALL
ORDER BY hour ASC
LIMIT 5;
```
```text
┌────────────────hour─┬───hits─┐
│ 2015-07-01 00:00:00 │ 656676 │
│ 2015-07-01 01:00:00 │ 768837 │
│ 2015-07-01 02:00:00 │ 862311 │
│ 2015-07-01 03:00:00 │ 829261 │
│ 2015-07-01 04:00:00 │ 749365 │
└─────────────────────┴────────┘
```
The `toStartOfHour()` function used here converts the given time to the start of the hour. You can also group by year, quarter, month, or day.
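For example, a monthly rollup follows the same shape — a sketch using `toStartOfMonth()` (not from the original text):

```sql
-- Monthly buckets instead of daily or hourly ones.
SELECT
    toStartOfMonth(time) AS month,
    sum(hits) AS hits
FROM wikistat
GROUP BY ALL
ORDER BY month ASC
LIMIT 5;
```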
## Custom grouping intervals {#time-series-custom-grouping-intervals}
We can even group by arbitrary intervals, e.g., 5 minutes, using the `toStartOfInterval()` function. Let's say we want to group by 4-hour intervals. We can specify the grouping interval using the `INTERVAL` clause:
```sql
SELECT
    toStartOfInterval(time, INTERVAL 4 HOUR) AS interval,
    sum(hits) AS hits
FROM wikistat
WHERE date(time) = '2015-07-01'
GROUP BY ALL
ORDER BY interval ASC
LIMIT 6;
```
Or we can use the `toIntervalHour()` function:
```sql
SELECT
    toStartOfInterval(time, toIntervalHour(4)) AS interval,
    sum(hits) AS hits
FROM wikistat
WHERE date(time) = '2015-07-01'
GROUP BY ALL
ORDER BY interval ASC
LIMIT 6;
```
Either way, we get the following results:
```text
┌────────────interval─┬────hits─┐
│ 2015-07-01 00:00:00 │ 3117085 │
│ 2015-07-01 04:00:00 │ 2928396 │
│ 2015-07-01 08:00:00 │ 2679775 │
│ 2015-07-01 12:00:00 │ 2461324 │
│ 2015-07-01 16:00:00 │ 2823199 │
│ 2015-07-01 20:00:00 │ 2984758 │
└─────────────────────┴─────────┘
```
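The same pattern covers the 5-minute case mentioned earlier — a sketch (not from the original text):

```sql
-- 5-minute buckets via toStartOfInterval.
SELECT
    toStartOfInterval(time, INTERVAL 5 MINUTE) AS interval,
    sum(hits) AS hits
FROM wikistat
WHERE date(time) = '2015-07-01'
GROUP BY ALL
ORDER BY interval ASC
LIMIT 6;
```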
## Filling empty groups {#time-series-filling-empty-groups}
In many cases we deal with sparse data that has absent intervals, which results in empty buckets. Let's take the following example, where we group data by 1-hour intervals. This outputs the following stats, with some hours missing values:
```sql
SELECT
    toStartOfHour(time) AS hour,
    sum(hits)
FROM wikistat
WHERE (project = 'ast') AND (subproject = 'm') AND (date(time) = '2015-07-01')
GROUP BY ALL
ORDER BY hour ASC;
```
```text
┌────────────────hour─┬─sum(hits)─┐
│ 2015-07-01 00:00:00 │         3 │ <- missing values
│ 2015-07-01 02:00:00 │         1 │ <- missing values
│ 2015-07-01 04:00:00 │         1 │
│ 2015-07-01 05:00:00 │         2 │
│ 2015-07-01 06:00:00 │         1 │
│ 2015-07-01 07:00:00 │         1 │
│ 2015-07-01 08:00:00 │         3 │
│ 2015-07-01 09:00:00 │         2 │ <- missing values
│ 2015-07-01 12:00:00 │         2 │
│ 2015-07-01 13:00:00 │         4 │
│ 2015-07-01 14:00:00 │         2 │
│ 2015-07-01 15:00:00 │         2 │
│ 2015-07-01 16:00:00 │         2 │
│ 2015-07-01 17:00:00 │         1 │
│ 2015-07-01 18:00:00 │         5 │
│ 2015-07-01 19:00:00 │         5 │
│ 2015-07-01 20:00:00 │         4 │
│ 2015-07-01 21:00:00 │         4 │
│ 2015-07-01 22:00:00 │         2 │
│ 2015-07-01 23:00:00 │         2 │
└─────────────────────┴───────────┘
```
ClickHouse provides the `WITH FILL` modifier to address this. It fills all the empty hours with zeros, so we can better understand the distribution over time:
```sql
SELECT
    toStartOfHour(time) AS hour,
    sum(hits)
FROM wikistat
WHERE (project = 'ast') AND (subproject = 'm') AND (date(time) = '2015-07-01')
GROUP BY ALL
ORDER BY hour ASC WITH FILL STEP toIntervalHour(1);
```
```text
┌────────────────hour─┬─sum(hits)─┐
│ 2015-07-01 00:00:00 │         3 │
│ 2015-07-01 01:00:00 │         0 │ <- new value
│ 2015-07-01 02:00:00 │         1 │
│ 2015-07-01 03:00:00 │         0 │ <- new value
│ 2015-07-01 04:00:00 │         1 │
│ 2015-07-01 05:00:00 │         2 │
│ 2015-07-01 06:00:00 │         1 │
│ 2015-07-01 07:00:00 │         1 │
│ 2015-07-01 08:00:00 │         3 │
│ 2015-07-01 09:00:00 │         2 │
│ 2015-07-01 10:00:00 │         0 │ <- new value
│ 2015-07-01 11:00:00 │         0 │ <- new value
│ 2015-07-01 12:00:00 │         2 │
│ 2015-07-01 13:00:00 │         4 │
│ 2015-07-01 14:00:00 │         2 │
│ 2015-07-01 15:00:00 │         2 │
│ 2015-07-01 16:00:00 │         2 │
│ 2015-07-01 17:00:00 │         1 │
│ 2015-07-01 18:00:00 │         5 │
│ 2015-07-01 19:00:00 │         5 │
│ 2015-07-01 20:00:00 │         4 │
│ 2015-07-01 21:00:00 │         4 │
│ 2015-07-01 22:00:00 │         2 │
│ 2015-07-01 23:00:00 │         2 │
└─────────────────────┴───────────┘
```
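`WITH FILL` also accepts explicit `FROM`/`TO` bounds, which helps when the first or last buckets themselves are missing. A sketch with bounds chosen for this example (not from the original text):

```sql
-- Fill the full day even if the series starts late or ends early.
SELECT
    toStartOfHour(time) AS hour,
    sum(hits)
FROM wikistat
WHERE (project = 'ast') AND (subproject = 'm') AND (date(time) = '2015-07-01')
GROUP BY ALL
ORDER BY hour ASC
    WITH FILL
        FROM toDateTime('2015-07-01 00:00:00')
        TO toDateTime('2015-07-02 00:00:00')
        STEP toIntervalHour(1);
```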
## Rolling time windows {#time-series-rolling-time-windows}
Sometimes we don't want to deal with the start of intervals (like the start of a day or an hour) but rather with rolling windows. Let's say we want to understand the total hits for a window based not on days but on a 24-hour period offset from 6 pm. We can use the `dateDiff()` function to calculate the difference between a reference time and each record's time. In this case, the `day` column will represent the difference in days (e.g., 1 day ago, 2 days ago, etc.):
```sql
SELECT
    dateDiff('day', toDateTime('2015-05-01 18:00:00'), time) AS day,
    sum(hits)
FROM wikistat
GROUP BY ALL
ORDER BY day ASC
LIMIT 5;
```
```text
┌─day─┬─sum(hits)─┐
│   0 │  25524369 │
│   1 │  25608105 │
│   2 │  28567101 │
│   3 │  29229944 │
│   4 │  29383573 │
└─────┴───────────┘
```
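Rolling windows can also smooth a series rather than re-anchor it. As a sketch (not from the original text), a 7-day moving average of daily hits using a window frame:

```sql
-- 7-day moving average: the current day plus the six preceding days.
SELECT
    toDate(time) AS day,
    sum(hits) AS hits,
    avg(hits) OVER (ORDER BY day ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) AS avg_7d
FROM wikistat
GROUP BY ALL
ORDER BY day ASC
LIMIT 10;
```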
---
description: 'Landing page for Machine Learning and GenAI use case guides'
pagination_prev: null
pagination_next: null
slug: /use-cases/AI/ask-ai
title: 'Machine learning and GenAI'
keywords: ['machine learning', 'genAI', 'AI']
doc_type: 'landing-page'
---
# Machine Learning and GenAI
ClickHouse is ideally suited as a real-time database to power machine learning workloads. With ClickHouse, it's easier than ever to unleash GenAI on your analytics data. In this section, you'll find some guides on how ClickHouse is used for machine learning and GenAI.
| Section                   | Description                                                                                                                                      |
|---------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------|
| AI chat                   | This guide explains how to enable and use the AI Chat feature in the ClickHouse Cloud Console.                                                   |
| MCP                       | A collection of guides to get you set up using Model Context Protocol (MCP) with ClickHouse.                                                     |
| AI-powered SQL generation | This feature allows users to describe their data requirements in plain text, which the system then translates into corresponding SQL statements. |
---
slug: /use-cases/AI/ai-powered-sql-generation
sidebar_label: 'AI-powered SQL generation'
title: 'AI-powered SQL generation'
pagination_prev: null
pagination_next: null
description: 'This guide explains how to use AI to generate SQL queries in ClickHouse Client or clickhouse-local.'
keywords: ['AI', 'SQL generation']
show_related_blogs: true
doc_type: 'guide'
---
Starting from ClickHouse 25.7, ClickHouse Client and clickhouse-local include AI-powered functionality that converts natural language descriptions into SQL queries. This feature allows users to describe their data requirements in plain text, which the system then translates into corresponding SQL statements.
This capability is particularly useful for users who may not be familiar with complex SQL syntax or need to quickly generate queries for exploratory data analysis. The feature works with standard ClickHouse tables and supports common query patterns including filtering, aggregation, and joins.
It does this with help from the following built-in tools/functions:

- `list_databases` - List all available databases in the ClickHouse instance
- `list_tables_in_database` - List all tables in a specific database
- `get_schema_for_table` - Get the `CREATE TABLE` statement (schema) for a specific table
## Prerequisites {#prerequisites}

We'll need to add an Anthropic or OpenAI key as an environment variable:

```bash
export ANTHROPIC_API_KEY=your_api_key
export OPENAI_API_KEY=your_api_key
```

Alternatively, you can provide a configuration file.
## Connecting to the ClickHouse SQL playground {#connecting-to-the-clickhouse-sql-playground}

We're going to explore this feature using the ClickHouse SQL playground, which we can connect to with the following command:

```bash
clickhouse client -mn \
 --host sql-clickhouse.clickhouse.com \
 --secure \
 --user demo --password ''
```
:::note
We'll assume you have ClickHouse installed; if not, refer to the installation guide.
:::
## Asking ClickHouse questions in natural language {#asking-clickhouse-questions-in-natural-language}
Now it's time to start asking some questions!
The text-to-SQL feature is effectively a one-shot query generation tool. Since it doesn't maintain conversation history, include as much useful context as possible in your question. Be specific about:

- Time periods or date ranges
- The type of analysis you want (averages, totals, rankings, etc.)
- Any filtering criteria
### Finding expensive housing markets {#finding-expensive-housing-markets}
Let's start by asking a question about house prices. The SQL playground contains a UK house prices dataset, which the AI will automatically discover:
```sql
?? Can you tell me the most expensive place to buy a house in 2021?;
```

Once we press enter, we'll see the thought process of the AI as it tries to answer our question.
```text
• Starting AI SQL generation with schema discovery...
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🧠 thinking...[INFO] Text generation successful - model: claude-3-5-sonnet-latest, response_id: msg_01QpQd35ZD8HM9QSYFZuQ8Wh
🧠 Calling: list_databases [toolu_01...]
🧠 thinking✓ list_databases completed
  └─ Found 37 databases: - amazon - bluesky - country - covid - default - dns - en...
🧠 Calling: list_databases [toolu_01...]
✓ list_databases completed
  └─ Found 37 databases: - amazon - bluesky - country - covid - default - dns - en...
🧠 thinking[INFO] Text generation successful - model: claude-3-5-sonnet-latest, response_id: msg_016DxK8SMVKGrUPfFwi43AzE
🧠 Calling: list_tables_in_database [toolu_01...]
  └─ Args: {"database":"uk"}
🧠 thinking.✓ list_tables_in_database completed
  └─ Found 9 tables in database 'uk': - uk_codes - uk_postcode_to_iso - uk_price_p...
🧠 Calling: list_tables_in_database [toolu_01...]
  └─ Args: {"database":"uk"}
✓ list_tables_in_database completed
  └─ Found 9 tables in database 'uk': - uk_codes - uk_postcode_to_iso - uk_price_p...
🧠 thinking.[INFO] Text generation successful - model: claude-3-5-sonnet-latest, response_id: msg_015giF12wm619beNGC5aTrmw
🧠 Calling: get_schema_for_table [toolu_01...]
  └─ Args: {"database":"uk","table":"uk_price_paid"}
✓ get_schema_for_table completed
  └─ Schema for uk.uk_price_paid: CREATE TABLE uk.uk_price_paid ( `price` UInt...
🧠 Calling: get_schema_for_table [toolu_01...]
  └─ Args: {"database":"uk","table":"uk_price_paid"}
🧠 thinking..✓ get_schema_for_table completed
  └─ Schema for uk.uk_price_paid: CREATE TABLE uk.uk_price_paid ( `price` UInt...
🧠 thinking[INFO] Text generation successful - model: claude-3-5-sonnet-latest, response_id: msg_01HxT1HKbaTT3165Wx5bDtY9
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
• ✨ SQL query generated successfully!

:) SELECT town, district, county, round(avg(price), 2) as avg_price, count() as total_sales FROM uk.uk_price_paid WHERE date >= '2021-01-01' AND date <= '2021-12-31' GROUP BY town, district, county HAVING total_sales >= 10 ORDER BY avg_price DESC LIMIT 10
```
The AI follows these steps:

1. Schema discovery - Explores available databases and tables
2. Table analysis - Examines the structure of relevant tables
3. Query generation - Creates SQL based on your question and the discovered schema
We can see that it did find the `uk_price_paid` table and generated a query for us to run. If we run that query, we'll see the following output:
```text
┌─town───────────┬─district───────────────┬─county──────────┬──avg_price─┬─total_sales─┐
│ ILKLEY         │ HARROGATE              │ NORTH YORKSHIRE │    4310200 │          10 │
│ LONDON         │ CITY OF LONDON         │ GREATER LONDON  │ 4008117.32 │         311 │
│ LONDON         │ CITY OF WESTMINSTER    │ GREATER LONDON  │ 2847409.81 │        3984 │
│ LONDON         │ KENSINGTON AND CHELSEA │ GREATER LONDON  │  2331433.1 │        2594 │
│ EAST MOLESEY   │ RICHMOND UPON THAMES   │ GREATER LONDON  │ 2244845.83 │          12 │
│ LEATHERHEAD    │ ELMBRIDGE              │ SURREY          │ 2051836.42 │         102 │
│ VIRGINIA WATER │ RUNNYMEDE              │ SURREY          │ 1914137.53 │         169 │
│ REIGATE        │ MOLE VALLEY            │ SURREY          │ 1715780.89 │          18 │
│ BROADWAY       │ TEWKESBURY             │ GLOUCESTERSHIRE │ 1633421.05 │          19 │
│ OXFORD         │ SOUTH OXFORDSHIRE      │ OXFORDSHIRE     │ 1628319.07 │         405 │
└────────────────┴────────────────────────┴─────────────────┴────────────┴─────────────┘
```
If we want to ask follow-up questions, we need to ask our question from scratch.
### Finding expensive properties in Greater London {#finding-expensive-properties-in-greater-london}
Since the feature doesn't maintain conversation history, each query must be self-contained. When asking follow-up questions, you need to provide the full context rather than referring to previous queries.
For example, after seeing the previous results, we might want to focus specifically on Greater London properties. Rather than asking "What about Greater London?", we need to include the complete context:
```sql
?? Can you tell me the most expensive place to buy a house in Greater London across the years?;
```

Notice that the AI goes through the same discovery process, even though it just examined this data:
```text
• Starting AI SQL generation with schema discovery...
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🧠 thinking[INFO] Text generation successful - model: claude-3-5-sonnet-latest, response_id: msg_012m4ayaSHTYtX98gxrDy1rz
🧠 Calling: list_databases [toolu_01...]
✓ list_databases completed
  └─ Found 37 databases: - amazon - bluesky - country - covid - default - dns - en...
🧠 Calling: list_databases [toolu_01...]
🧠 thinking.✓ list_databases completed
  └─ Found 37 databases: - amazon - bluesky - country - covid - default - dns - en...
🧠 thinking.[INFO] Text generation successful - model: claude-3-5-sonnet-latest, response_id: msg_01KU4SZRrJckutXUzfJ4NQtA
🧠 Calling: list_tables_in_database [toolu_01...]
  └─ Args: {"database":"uk"}
🧠 thinking..✓ list_tables_in_database completed
  └─ Found 9 tables in database 'uk': - uk_codes - uk_postcode_to_iso - uk_price_p...
🧠 Calling: list_tables_in_database [toolu_01...]
  └─ Args: {"database":"uk"}
✓ list_tables_in_database completed
  └─ Found 9 tables in database 'uk': - uk_codes - uk_postcode_to_iso - uk_price_p...
🧠 thinking[INFO] Text generation successful - model: claude-3-5-sonnet-latest, response_id: msg_01X9CnxoBpbD2xj2UzuRy2is
🧠 Calling: get_schema_for_table [toolu_01...]
  └─ Args: {"database":"uk","table":"uk_price_paid"}
🧠 thinking.✓ get_schema_for_table completed
  └─ Schema for uk.uk_price_paid: CREATE TABLE uk.uk_price_paid ( `price` UInt...
🧠 Calling: get_schema_for_table [toolu_01...]
  └─ Args: {"database":"uk","table":"uk_price_paid"}
✓ get_schema_for_table completed
  └─ Schema for uk.uk_price_paid: CREATE TABLE uk.uk_price_paid ( `price` UInt...
🧠 thinking...[INFO] Text generation successful - model: claude-3-5-sonnet-latest, response_id: msg_01QTMypS1XuhjgVpDir7N9wD
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
• ✨ SQL query generated successfully!

:) SELECT district, toYear(date) AS year, round(avg(price), 2) AS avg_price, count() AS total_sales FROM uk.uk_price_paid WHERE county = 'GREATER LONDON' GROUP BY district, year HAVING total_sales >= 10 ORDER BY avg_price DESC LIMIT 10;
```
This generates a more targeted query that filters specifically for Greater London and breaks down results by year.
The output of the query is shown below:
```text
┌─district────────────┬─year─┬───avg_price─┬─total_sales─┐
│ CITY OF LONDON      │ 2019 │ 14504772.73 │         299 │
│ CITY OF LONDON      │ 2017 │  6351366.11 │         367 │
│ CITY OF LONDON      │ 2016 │  5596348.25 │         243 │
│ CITY OF LONDON      │ 2023 │  5576333.72 │         252 │
│ CITY OF LONDON      │ 2018 │  4905094.54 │         523 │
│ CITY OF LONDON      │ 2021 │  4008117.32 │         311 │
│ CITY OF LONDON      │ 2025 │  3954212.39 │          56 │
│ CITY OF LONDON      │ 2014 │  3914057.39 │         416 │
│ CITY OF LONDON      │ 2022 │  3700867.19 │         290 │
│ CITY OF WESTMINSTER │ 2018 │  3562457.76 │        3346 │
└─────────────────────┴──────┴─────────────┴─────────────┘
```
The City of London consistently appears as the most expensive district! You'll notice the AI created a reasonable query, though the results are ordered by average price rather than chronologically. For a year-over-year analysis, we might refine our question to ask specifically for "the most expensive district each year" to get results grouped differently.
---
slug: /use-cases/observability/oss-monitoring
title: 'Self-managed monitoring'
sidebar_label: 'Self-managed monitoring'
description: 'Self-Managed Monitoring Guide'
doc_type: 'guide'
keywords: ['observability', 'monitoring', 'self-managed', 'metrics', 'system health']
---
import ObservabilityIntegrations from '@site/docs/_snippets/_observability_integration_options.md';
import DirectIntegrations from '@site/docs/_snippets/_direct_observability_integration_options.md';
import CommunityMonitoring from '@site/docs/_snippets/_community_monitoring.md';
# Self-managed monitoring {#cloud-monitoring}
This guide provides enterprise teams evaluating ClickHouse open-source with comprehensive information on monitoring and observability capabilities for production deployments. Enterprise customers frequently ask about out-of-the-box monitoring features, integration with existing observability stacks including tools like Datadog and AWS CloudWatch, and how ClickHouse's monitoring compares to self-hosted deployments.
## Prometheus-based integration architecture {#prometheus}
ClickHouse exposes Prometheus-compatible metrics through different endpoints depending on your deployment model, each with distinct operational characteristics:
**Self-managed/OSS ClickHouse**

A direct server Prometheus endpoint is accessible via the standard `/metrics` endpoint on your ClickHouse server. This approach provides:

- Complete metric exposure: the full range of available ClickHouse metrics without built-in filtering
- Real-time metrics: generated directly from system tables when scraped
- Direct system access: queries production system tables, which adds monitoring load and prevents cost-saving idle states
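Those Prometheus metrics are generated from system tables such as `system.metrics`, `system.events`, and `system.asynchronous_metrics`, which can also be queried directly over SQL — a sketch:

```sql
-- Inspect a few current metric values and their descriptions.
SELECT metric, value, description
FROM system.metrics
ORDER BY metric
LIMIT 5;
```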
## ClickStack deployment options {#clickstack-deployment}
- Helm: Recommended for Kubernetes-based debugging environments. Allows for environment-specific configuration, resource limits, and scaling via `values.yaml`.
- Docker Compose: Deploys each component (ClickHouse, HyperDX, OTel collector, MongoDB) individually.
- HyperDX Only: Standalone HyperDX container.

For complete deployment options and architecture details, see the ClickStack documentation and data ingestion guide.
---
slug: /use-cases/observability/cloud-monitoring
title: 'ClickHouse Cloud monitoring'
sidebar_label: 'ClickHouse Cloud monitoring'
description: 'ClickHouse Cloud Monitoring Guide'
doc_type: 'guide'
keywords: ['observability', 'monitoring', 'cloud', 'metrics', 'system health']
---
import AdvancedDashboard from '@site/static/images/cloud/manage/monitoring/advanced_dashboard.png';
import NativeAdvancedDashboard from '@site/static/images/cloud/manage/monitoring/native_advanced_dashboard.png';
import Image from '@theme/IdealImage';
import ObservabilityIntegrations from '@site/docs/_snippets/_observability_integration_options.md';
import DirectIntegrations from '@site/docs/_snippets/_direct_observability_integration_options.md';
import CommunityMonitoring from '@site/docs/_snippets/_community_monitoring.md';
ClickHouse Cloud monitoring {#cloud-monitoring}
This guide provides enterprise teams evaluating ClickHouse Cloud with comprehensive information on monitoring and observability capabilities for production deployments. Enterprise customers frequently ask about out-of-the-box monitoring features, integration with existing observability stacks including tools like Datadog and AWS CloudWatch, and how ClickHouse's monitoring compares to self-hosted deployments.
Advanced observability dashboard {#advanced-observability}
ClickHouse Cloud provides comprehensive monitoring through built-in dashboard interfaces accessible via the Monitoring section. These dashboards visualize system and performance metrics in real-time without requiring additional setup and serve as the primary tools for real-time production monitoring within ClickHouse Cloud.
Advanced Dashboard
: The main dashboard interface accessible via Monitoring → Advanced dashboard provides real-time visibility into query rates, resource usage, system health, and storage performance. This dashboard doesn't require separate authentication, won't prevent instances from idling, and doesn't add query load to your production system. Each visualization is powered by customizable SQL queries, with out-of-the-box charts grouped into ClickHouse-specific, system health, and Cloud-specific metrics. Users can extend monitoring by creating custom queries directly in the SQL console.
:::note
Accessing these metrics does not issue a query to the underlying service and will not wake idle services.
:::
Users looking to extend these visualizations can use the dashboards feature in ClickHouse Cloud, querying system tables directly.
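For instance, a custom dashboard panel could be driven by a query over system.metrics. The metric names below are a small illustrative subset, not a recommended monitoring set:

```sql
-- Illustrative custom-dashboard query over system tables.
-- The metric names are a small sample; see system.metrics for the full list.
SELECT metric, value
FROM system.metrics
WHERE metric IN ('Query', 'TCPConnection', 'MemoryTracking')
ORDER BY metric;
```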
Native advanced dashboard
: An alternative dashboard interface accessible through "You can still access the native advanced dashboard" within the Monitoring section. This opens in a separate tab with authentication and provides an alternative UI for system and service health monitoring. This dashboard allows advanced analytics, where users can modify the underlying SQL queries.
Both dashboards offer immediate visibility into service health and performance without external dependencies, distinguishing them from external debugging-focused tools like ClickStack.
For detailed dashboard features and available metrics, see the
advanced dashboard documentation
.
Query insights and resource monitoring {#query-insights}
ClickHouse Cloud includes additional monitoring capabilities:
Query Insights: Built-in interface for query performance analysis and troubleshooting
Resource Utilization Dashboard: Tracks memory, CPU allocation, and data transfer patterns. CPU usage and memory usage graphs show the maximum utilization metric in a particular time period. The CPU usage graph shows a system-level CPU utilization metric (NOT a ClickHouse CPU utilization metric).
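The kind of per-query analysis that Query Insights surfaces can be approximated directly in SQL. The following sketch aggregates system.query_log over the last day; the chosen columns, time window, and LIMIT are illustrative, not what Query Insights itself runs:

```sql
-- Illustrative query-performance summary from system.query_log.
SELECT
    normalized_query_hash,
    count() AS executions,
    round(avg(query_duration_ms)) AS avg_ms,
    formatReadableSize(avg(memory_usage)) AS avg_memory
FROM system.query_log
WHERE type = 'QueryFinish' AND event_time > now() - INTERVAL 1 DAY
GROUP BY normalized_query_hash
ORDER BY avg_ms DESC
LIMIT 10;
```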
See the
query insights
and
resource utilization
documentation for detailed features.
Prometheus-compatible metrics endpoint {#prometheus}
ClickHouse Cloud provides a Prometheus endpoint. This allows users to maintain current workflows, leverage existing team expertise, and integrate ClickHouse metrics into enterprise monitoring platforms including Grafana, Datadog, and other Prometheus-compatible tools.
The organization-level endpoint federates metrics from all services, while per-service endpoints provide granular monitoring. Key features include:
- Filtered metrics option: The optional filtered_metrics=true parameter reduces payload from 1000+ available metrics to 125 'mission critical' metrics for cost optimization and easier monitoring focus
- Cached metric delivery: Uses materialized views refreshed every minute to minimize query load on production systems
:::note
This approach respects service idling behavior, allowing for cost optimization when services are not actively processing queries. This API endpoint relies on ClickHouse Cloud API credentials. For complete endpoint configuration details, see the cloud
Prometheus documentation
.
:::
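To sketch what a scrape against this endpoint could look like, here is a hypothetical Prometheus job. The URL path, organization ID placeholder, and basic-auth fields are assumptions for illustration; confirm the exact endpoint and authentication against the cloud Prometheus documentation before use. The filtered_metrics parameter is the one described above.

```yaml
# Hypothetical scrape job for the ClickHouse Cloud Prometheus endpoint.
# The metrics path and credentials below are placeholders; verify them
# against the cloud Prometheus documentation.
scrape_configs:
  - job_name: clickhouse-cloud
    scheme: https
    metrics_path: /v1/organizations/<ORG_ID>/prometheus
    params:
      filtered_metrics: ['true']
    basic_auth:
      username: <API_KEY_ID>
      password: <API_KEY_SECRET>
    static_configs:
      - targets: ['api.clickhouse.cloud']
```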
ClickStack deployment options {#clickstack-deployment}
HyperDX in ClickHouse Cloud
(private preview): HyperDX can be launched on any ClickHouse Cloud service.
Helm
: Recommended for Kubernetes-based debugging environments. Supports integration with ClickHouse Cloud and allows for environment-specific configuration, resource limits, and scaling via
values.yaml
.
Docker Compose
: Deploys each component (ClickHouse, HyperDX, OTel collector, MongoDB) individually. Users can modify the compose file to remove any unused components when integrating with ClickHouse Cloud, specifically ClickHouse and the OpenTelemetry Collector.
HyperDX Only
: Standalone HyperDX container.
For complete deployment options and architecture details, see the
ClickStack documentation
and
data ingestion guide
.
:::note
Users can also collect metrics from the ClickHouse Cloud Prometheus endpoint via an OpenTelemetry Collector and forward them to a separate ClickStack deployment for visualization.
:::
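A minimal sketch of such a collector pipeline is shown below, scraping the Cloud Prometheus endpoint and forwarding metrics over OTLP. The endpoint URLs, credentials, and service names are placeholders for illustration, not a tested configuration:

```yaml
# Hypothetical OpenTelemetry Collector pipeline: scrape the ClickHouse
# Cloud Prometheus endpoint and forward metrics to a ClickStack deployment.
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: clickhouse-cloud
          scheme: https
          metrics_path: /v1/organizations/<ORG_ID>/prometheus
          basic_auth:
            username: <API_KEY_ID>
            password: <API_KEY_SECRET>
          static_configs:
            - targets: ['api.clickhouse.cloud']
exporters:
  otlphttp:
    endpoint: http://clickstack-otel-collector:4318
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [otlphttp]
```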
System impact considerations {#system-impact}
The approaches above rely on a mixture of Prometheus endpoints, dashboards managed by ClickHouse Cloud, and direct queries against system tables.
The latter of these options relies on querying the production ClickHouse service. This adds query load to the system under observation and prevents ClickHouse Cloud instances from idling, impacting cost optimization. Additionally, if the production system fails, monitoring may also be affected, since the two are coupled. This approach works well for deep introspection and debugging but is less appropriate for real-time production monitoring. Consider these trade-offs between detailed system analysis capabilities and operational overhead when evaluating direct Grafana integration versus the external tool integration approaches discussed in the following section.
slug: /use-cases/observability
title: 'Observability'
pagination_prev: null
pagination_next: null
description: 'Landing page for the Observability use case guide'
keywords: ['observability', 'logs', 'traces', 'metrics', 'OpenTelemetry', 'Grafana', 'OTel']
doc_type: 'guide'
ClickHouse offers unmatched speed, scale, and cost-efficiency for observability. This guide provides two paths depending on your needs:
ClickStack - the ClickHouse observability stack {#clickstack}
The ClickHouse Observability Stack is our
recommended approach
for most users.
ClickStack
is a production-grade observability platform built on ClickHouse and OpenTelemetry (OTel), unifying logs, traces, metrics, and session data in a single high-performance, scalable solution that works from single-node deployments to
multi-petabyte
scale.
| Section | Description |
|---------|-------------|
|
Overview
| Introduction to ClickStack and its key features |
|
Getting Started
| Quick start guide and basic setup instructions |
|
Example Datasets
| Sample datasets and use cases |
|
Architecture
| System architecture and components overview |
|
Deployment
| Deployment guides and options |
|
Configuration
| Detailed configuration options and settings |
|
Ingesting Data
| Guidelines for ingesting data to ClickStack |
|
Search
| How to search and query your observability data |
|
Production
| Best practices for production deployment |
Build-your-own stack {#build-your-own-stack}
For users with
custom requirements
— such as highly specialized ingestion pipelines, schema designs, or extreme scaling needs — we provide guidance to build a custom observability stack with ClickHouse as the core database.
| Page | Description |
|-------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
Introduction
| This guide is designed for users looking to build their own observability solution using ClickHouse, focusing on logs and traces. |
|
Schema design
| Learn why users are recommended to create their own schema for logs and traces, along with some best practices for doing so. |
|
Managing data
| Deployments of ClickHouse for observability invariably involve large datasets, which need to be managed. ClickHouse offers features to assist with data management. |
|
Integrating OpenTelemetry
| Collecting and exporting logs and traces using OpenTelemetry with ClickHouse. |
|
Using Visualization Tools
| Learn how to use observability visualization tools for ClickHouse, including HyperDX and Grafana. |
|
Demo Application
| Explore the OpenTelemetry demo application forked to work with ClickHouse for logs and traces. |
slug: /use-cases/data-lake/unity-catalog
sidebar_label: 'Unity catalog'
title: 'Unity catalog'
pagination_prev: null
pagination_next: null
description: 'In this guide, we will walk you through the steps to query
your data in S3 buckets using ClickHouse and the Unity Catalog.'
keywords: ['Unity', 'Data Lake']
show_related_blogs: true
doc_type: 'guide'
import BetaBadge from '@theme/badges/BetaBadge';
:::note
Integration with the Unity Catalog works for managed and external tables.
This integration is currently only supported on AWS.
:::
ClickHouse supports integration with multiple catalogs (Unity, Glue, Polaris, etc.). This guide will walk you through the steps to query your data managed by Databricks using ClickHouse and the
Unity Catalog
.
Databricks supports multiple data formats for their lakehouse. With ClickHouse, you can query Unity Catalog tables as both Delta and Iceberg.
:::note
As this feature is experimental, you will need to enable it using:
SET allow_experimental_database_unity_catalog = 1;
:::
Configuring Unity in Databricks {#configuring-unity-in-databricks}
To allow ClickHouse to interact with the Unity catalog, you need to make sure the Unity Catalog is configured to allow interaction with an external reader. This can be achieved by following the
"Enable external data access to Unity Catalog"
guide.
In addition to enabling external access, ensure the principal configuring the integration has the
EXTERNAL USE SCHEMA
privilege
on the schema containing the tables.
Once your catalog is configured, you must generate credentials for ClickHouse. Two different methods can be used, depending on your interaction mode with Unity:
For Iceberg clients, use authentication as a
service principal
.
For Delta clients, use a Personal Access Token (
PAT
).
Creating a connection between Unity Catalog and ClickHouse {#creating-a-connection-between-unity-catalog-and-clickhouse}
With your Unity Catalog configured and authentication in place, establish a connection between ClickHouse and Unity Catalog.
Read Delta {#read-delta}
sql
CREATE DATABASE unity
ENGINE = DataLakeCatalog('https://<workspace-id>.cloud.databricks.com/api/2.1/unity-catalog')
SETTINGS warehouse = 'CATALOG_NAME', catalog_credential = '<PAT>', catalog_type = 'unity'
Read Iceberg {#read-iceberg}
sql
CREATE DATABASE unity
ENGINE = DataLakeCatalog('https://<workspace-id>.cloud.databricks.com/api/2.1/unity-catalog/iceberg')
SETTINGS catalog_type = 'rest', catalog_credential = '<client-id>:<client-secret>', warehouse = 'workspace',
oauth_server_uri = 'https://<workspace-id>.cloud.databricks.com/oidc/v1/token', auth_scope = 'all-apis,sql'
Querying Unity catalog tables using ClickHouse {#querying-unity-catalog-tables-using-clickhouse}
Now that the connection is in place, you can start querying via the Unity catalog. For example:
```sql
USE unity;
SHOW TABLES;
┌─name────────────────────────────────────────────────┐
│ clickbench.delta_hits                               │
│ demo.fake_user                                      │
│ information_schema.catalog_privileges               │
│ information_schema.catalog_tags                     │
│ information_schema.catalogs                         │
│ information_schema.check_constraints                │
│ information_schema.column_masks                     │
│ information_schema.column_tags                      │
│ information_schema.columns                          │
│ information_schema.constraint_column_usage          │
│ information_schema.constraint_table_usage           │
│ information_schema.information_schema_catalog_name  │
│ information_schema.key_column_usage                 │
│ information_schema.parameters                       │
│ information_schema.referential_constraints          │
│ information_schema.routine_columns                  │
│ information_schema.routine_privileges               │
│ information_schema.routines                         │
│ information_schema.row_filters                      │
│ information_schema.schema_privileges                │
│ information_schema.schema_tags                      │
│ information_schema.schemata                         │
│ information_schema.table_constraints                │
│ information_schema.table_privileges                 │
│ information_schema.table_tags                       │
│ information_schema.tables                           │
│ information_schema.views                            │
│ information_schema.volume_privileges                │
│ information_schema.volume_tags                      │
│ information_schema.volumes                          │
│ uniform.delta_hits                                  │
└─────────────────────────────────────────────────────┘
```
If you're using the Iceberg client, only Delta tables with Uniform enabled will be shown:
```sql
SHOW TABLES
┌─name───────────────┐
│ uniform.delta_hits │
└────────────────────┘
```
To query a table:
sql
SELECT count(*) FROM `uniform.delta_hits`
:::note Backticks required
Backticks are required because ClickHouse doesn't support more than one namespace.
:::
To inspect the table DDL:
```sql
SHOW CREATE TABLE `uniform.delta_hits`

CREATE TABLE unity_uniform.`uniform.delta_hits`
(
    `WatchID` Int64,
    `JavaEnable` Int32,
    `Title` String,
    `GoodEvent` Int32,
    `EventTime` DateTime64(6, 'UTC'),
    `EventDate` Date,
    `CounterID` Int32,
    `ClientIP` Int32,
    ...
    `FromTag` String,
    `HasGCLID` Int32,
    `RefererHash` Int64,
    `URLHash` Int64,
    `CLID` Int32
)
ENGINE = Iceberg('s3://')
```
Loading data from your Data Lake into ClickHouse {#loading-data-from-your-data-lake-into-clickhouse}
If you need to load data from Databricks into ClickHouse, start by creating a local ClickHouse table:
sql
CREATE TABLE hits
(
`WatchID` Int64,
`JavaEnable` Int32,
`Title` String,
`GoodEvent` Int32,
`EventTime` DateTime64(6, 'UTC'),
`EventDate` Date,
`CounterID` Int32,
`ClientIP` Int32,
...
`FromTag` String,
`HasGCLID` Int32,
`RefererHash` Int64,
`URLHash` Int64,
`CLID` Int32
)
PRIMARY KEY (CounterID, EventDate, UserID, EventTime, WatchID);
Then load the data from your Unity Catalog table via an
INSERT INTO SELECT
:
sql
INSERT INTO hits SELECT * FROM unity_uniform.`uniform.delta_hits`;
slug: /use-cases/data-lake/onelake-catalog
sidebar_label: 'Fabric OneLake'
title: 'Fabric OneLake'
pagination_prev: null
pagination_next: null
description: 'In this guide, we will walk you through the steps to query your data in Microsoft OneLake.'
keywords: ['OneLake', 'Data Lake', 'Fabric']
show_related_blogs: true
doc_type: 'guide'
import BetaBadge from '@theme/badges/BetaBadge';
ClickHouse supports integration with multiple catalogs (OneLake, Unity, Glue, Polaris, etc.). This guide will walk you through the steps to query your data stored in Microsoft OneLake using ClickHouse and
OneLake
.
Microsoft OneLake supports multiple table formats for their lakehouse. With ClickHouse, you can query Iceberg tables.
:::note
As this feature is beta, you will need to enable it using:
SET allow_database_iceberg = 1;
:::
Gathering OneLake requirements {#gathering-requirements}
Before querying your table in Microsoft Fabric, you'll need to collect the following information:
A OneLake tenant ID (Your Entra ID)
A client ID
A client secret
A warehouse ID and a data item ID
See
Microsoft OneLake's documentation
for help finding these values.
Creating a connection between OneLake and ClickHouse {#creating-a-connection-between-onelake-and-clickhouse}
With the required information above, you can now create a connection between Microsoft OneLake and ClickHouse. Before doing so, you need to enable catalogs:
sql
SET allow_database_iceberg=1
Connect to OneLake {#connect-onelake}
sql
CREATE DATABASE onelake_catalog
ENGINE = DataLakeCatalog('https://onelake.table.fabric.microsoft.com/iceberg')
SETTINGS
catalog_type = 'onelake',
warehouse = 'warehouse_id/data_item_id',
onelake_tenant_id = '<tenant_id>',
oauth_server_uri = 'https://login.microsoftonline.com/<tenant_id>/oauth2/v2.0/token',
auth_scope = 'https://storage.azure.com/.default',
onelake_client_id = '<client_id>',
onelake_client_secret = '<client_secret>'
Querying OneLake using ClickHouse {#querying-onelake-using-clickhouse}
Now that the connection is in place, you can start querying OneLake:
```sql
SHOW TABLES FROM onelake_catalog
Query id: 8f6124c4-45c2-4351-b49a-89dc13e548a7
   ┌─name──────────────────────────┐
1. │ year_2017.green_tripdata_2017 │
2. │ year_2018.green_tripdata_2018 │
3. │ year_2019.green_tripdata_2019 │
4. │ year_2020.green_tripdata_2020 │
5. │ year_2022.green_tripdata_2022 │
   └───────────────────────────────┘
```
To query a table:
```sql
SELECT *
FROM onelake_catalog.`year_2017.green_tripdata_2017`
LIMIT 1

Query id: db6b4bda-cc58-4ca1-8891-e0d14f02c890
Row 1:
──────
VendorID: 2
lpep_pickup_datetime: 2017-05-18 16:55:43.000000
lpep_dropoff_datetime: 2017-05-18 18:04:11.000000
store_and_fwd_flag: N
RatecodeID: 2
PULocationID: 130
DOLocationID: 48
passenger_count: 2
trip_distance: 12.43
fare_amount: 52
extra: 4.5
mta_tax: 0.5
tip_amount: 0
tolls_amount: 33
ehail_fee: ᴺᵁᴸᴸ
improvement_surcharge: 0.3
total_amount: 90.3
payment_type: 2
trip_type: 1
congestion_surcharge: ᴺᵁᴸᴸ
source_file: green_tripdata_2017-05.parquet
```
:::note Backticks required
Backticks are required because ClickHouse doesn't support more than one namespace.
:::
To inspect the table DDL:
```sql
SHOW CREATE TABLE onelake_catalog.`year_2017.green_tripdata_2017`

Query id: 8bd5bd8e-83be-453e-9a88-32de12ba7f24

CREATE TABLE onelake_catalog.`year_2017.green_tripdata_2017`
(
    `VendorID` Nullable(Int64),
    `lpep_pickup_datetime` Nullable(DateTime64(6, 'UTC')),
    `lpep_dropoff_datetime` Nullable(DateTime64(6, 'UTC')),
    `store_and_fwd_flag` Nullable(String),
    `RatecodeID` Nullable(Int64),
    `PULocationID` Nullable(Int64),
    `DOLocationID` Nullable(Int64),
    `passenger_count` Nullable(Int64),
    `trip_distance` Nullable(Float64),
    `fare_amount` Nullable(Float64),
    `extra` Nullable(Float64),
    `mta_tax` Nullable(Float64),
    `tip_amount` Nullable(Float64),
    `tolls_amount` Nullable(Float64),
    `ehail_fee` Nullable(Float64),
    `improvement_surcharge` Nullable(Float64),
    `total_amount` Nullable(Float64),
    `payment_type` Nullable(Int64),
    `trip_type` Nullable(Int64),
    `congestion_surcharge` Nullable(Float64),
    `source_file` Nullable(String)
)
ENGINE = Iceberg('abfss://@onelake.dfs.fabric.microsoft.com//Tables/year_2017/green_tripdata_2017')
```
Loading data from your Data Lake into ClickHouse {#loading-data-from-onelake-into-clickhouse}
If you need to load data from OneLake into ClickHouse:
```sql
CREATE TABLE trips
ENGINE = MergeTree
ORDER BY coalesce(VendorID, 0)
AS SELECT *
FROM onelake_catalog.`year_2017.green_tripdata_2017`
Query id: d15983a6-ef6a-40fe-80d5-19274b9fe328
Ok.
0 rows in set. Elapsed: 32.570 sec. Processed 11.74 million rows, 275.37 MB (360.36 thousand rows/s., 8.45 MB/s.)
Peak memory usage: 1.31 GiB.
```
slug: /use-cases/data-lake/nessie-catalog
sidebar_label: 'Nessie catalog'
title: 'Nessie catalog'
pagination_prev: null
pagination_next: null
description: 'In this guide, we will walk you through the steps to query
your data using ClickHouse and the Nessie Catalog.'
keywords: ['Nessie', 'REST', 'Transactional', 'Data Lake', 'Iceberg', 'Git-like']
show_related_blogs: true
doc_type: 'guide'
import ExperimentalBadge from '@theme/badges/ExperimentalBadge';
:::note
Integration with the Nessie Catalog works with Iceberg tables only.
This integration supports both AWS S3 and other cloud storage providers.
:::
ClickHouse supports integration with multiple catalogs (Unity, Glue, REST, Polaris, etc.). This guide will walk you through the steps to query your data using ClickHouse and the
Nessie
catalog.
Nessie is an open-source transactional catalog for data lakes that provides:
-
Git-inspired
data version control with branches and commits
-
Cross-table transactions
and visibility guarantees
-
REST API
compliance with the Iceberg REST catalog specification
-
Open data lake
approach supporting Hive, Spark, Dremio, Trino, and more
-
Production-ready
deployment on Docker or Kubernetes
:::note
As this feature is experimental, you will need to enable it using:
SET allow_experimental_database_iceberg = 1;
:::
Local Development Setup {#local-development-setup}
For local development and testing, you can use a containerized Nessie setup. This approach is ideal for learning, prototyping, and development environments.
Prerequisites {#local-prerequisites}
Docker and Docker Compose
: Ensure Docker is installed and running
Sample Setup
: You can use the official Nessie docker-compose setup
Setting up Local Nessie Catalog {#setting-up-local-nessie-catalog}
You can use the official
Nessie docker-compose setup
which provides a complete environment with Nessie, in-memory version store, and MinIO for object storage.
Step 1:
Create a new folder in which to run the example, then create a file
docker-compose.yml
with the following configuration:
```yaml
version: '3.8'
services:
nessie:
image: ghcr.io/projectnessie/nessie:latest
ports:
- "19120:19120"
environment:
- nessie.version.store.type=IN_MEMORY
- nessie.catalog.default-warehouse=warehouse
- nessie.catalog.warehouses.warehouse.location=s3://my-bucket/
- nessie.catalog.service.s3.default-options.endpoint=http://minio:9000/
- nessie.catalog.service.s3.default-options.access-key=urn:nessie-secret:quarkus:nessie.catalog.secrets.access-key
- nessie.catalog.service.s3.default-options.path-style-access=true
- nessie.catalog.service.s3.default-options.auth-type=STATIC
- nessie.catalog.secrets.access-key.name=admin
- nessie.catalog.secrets.access-key.secret=password
- nessie.catalog.service.s3.default-options.region=us-east-1
- nessie.server.authentication.enabled=false
depends_on:
minio:
condition: service_healthy
networks:
- iceberg_net
minio:
image: quay.io/minio/minio
ports:
- "9002:9000"
- "9003:9001"
environment:
- MINIO_ROOT_USER=admin
- MINIO_ROOT_PASSWORD=password
- MINIO_REGION=us-east-1
healthcheck:
test: ["CMD", "mc", "ready", "local"]
interval: 5s
timeout: 10s
retries: 5
start_period: 30s
entrypoint: >
/bin/sh -c "
minio server /data --console-address ':9001' &
sleep 10;
mc alias set myminio http://localhost:9000 admin password;
mc mb myminio/my-bucket --ignore-existing;
tail -f /dev/null"
networks:
- iceberg_net
clickhouse:
image: clickhouse/clickhouse-server:head
container_name: nessie-clickhouse
user: '0:0' # Ensures root permissions
ports:
- "8123:8123"
- "9000:9000"
volumes:
- clickhouse_data:/var/lib/clickhouse
- ./clickhouse/data_import:/var/lib/clickhouse/data_import # Mount dataset folder
networks:
- iceberg_net
environment:
- CLICKHOUSE_DB=default
- CLICKHOUSE_USER=default
- CLICKHOUSE_DO_NOT_CHOWN=1
- CLICKHOUSE_PASSWORD=
depends_on:
nessie:
condition: service_started
minio:
condition: service_healthy
volumes:
clickhouse_data:
networks:
iceberg_net:
driver: bridge
```
Step 2:
Run the following command to start the services:
bash
docker compose up -d
Step 3:
Wait for all services to be ready. You can check the logs:
bash
docker-compose logs -f
:::note
The Nessie setup uses an in-memory version store and requires that sample data be loaded into the Iceberg tables first. Make sure the environment has created and populated the tables before attempting to query them through ClickHouse.
:::
Connecting to Local Nessie Catalog {#connecting-to-local-nessie-catalog}
Connect to your ClickHouse container:
bash
docker exec -it nessie-clickhouse clickhouse-client
Then create the database connection to the Nessie catalog:
```sql
SET allow_experimental_database_iceberg = 1;
CREATE DATABASE demo
ENGINE = DataLakeCatalog('http://nessie:19120/iceberg', 'admin', 'password')
SETTINGS catalog_type = 'rest', storage_endpoint = 'http://minio:9002/my-bucket', warehouse = 'warehouse'
```
Querying Nessie catalog tables using ClickHouse {#querying-nessie-catalog-tables-using-clickhouse}
Now that the connection is in place, you can start querying via the Nessie catalog. For example:
```sql
USE demo;
SHOW TABLES;
```
If your setup includes sample data (such as the taxi dataset), you should see tables like:
sql title="Response"
┌─name──────────┐
│ default.taxis │
└───────────────┘
:::note
If you don't see any tables, this usually means:
1. The environment hasn't created the sample tables yet
2. The Nessie catalog service isn't fully initialized
3. The sample data loading process hasn't completed
You can check the Nessie logs to see the catalog activity:
bash
docker-compose logs nessie
:::
To query a table (if available):
sql
SELECT count(*) FROM `default.taxis`;
sql title="Response"
┌─count()─┐
│ 2171187 │
└─────────┘
:::note Backticks required
Backticks are required because ClickHouse doesn't support more than one namespace.
:::
To inspect the table DDL:
sql
SHOW CREATE TABLE `default.taxis`;
sql title="Response"
ββstatementββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β CREATE TABLE demo.`default.taxis` β
β ( β
β `VendorID` Nullable(Int64), β
β `tpep_pickup_datetime` Nullable(DateTime64(6)), β
β `tpep_dropoff_datetime` Nullable(DateTime64(6)), β
β `passenger_count` Nullable(Float64), β
β `trip_distance` Nullable(Float64), β
β `RatecodeID` Nullable(Float64), β
β `store_and_fwd_flag` Nullable(String), β
β `PULocationID` Nullable(Int64), β
β `DOLocationID` Nullable(Int64), β
β `payment_type` Nullable(Int64), β
β `fare_amount` Nullable(Float64), β
β `extra` Nullable(Float64), β
β `mta_tax` Nullable(Float64), β
β `tip_amount` Nullable(Float64), β
β `tolls_amount` Nullable(Float64), β
β `improvement_surcharge` Nullable(Float64), β
β `total_amount` Nullable(Float64), β
β `congestion_surcharge` Nullable(Float64), β
β `airport_fee` Nullable(Float64) β
β ) β
β ENGINE = Iceberg('http://localhost:9002/my-bucket/default/taxis/', 'admin', '[HIDDEN]') β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Loading data from your Data Lake into ClickHouse {#loading-data-from-your-data-lake-into-clickhouse}
If you need to load data from the Nessie catalog into ClickHouse, start by creating a local ClickHouse table:
sql
CREATE TABLE taxis
(
`VendorID` Int64,
`tpep_pickup_datetime` DateTime64(6),
`tpep_dropoff_datetime` DateTime64(6),
`passenger_count` Float64,
`trip_distance` Float64,
`RatecodeID` Float64,
`store_and_fwd_flag` String,
`PULocationID` Int64,
`DOLocationID` Int64,
`payment_type` Int64,
`fare_amount` Float64,
`extra` Float64,
`mta_tax` Float64,
`tip_amount` Float64,
`tolls_amount` Float64,
`improvement_surcharge` Float64,
`total_amount` Float64,
`congestion_surcharge` Float64,
`airport_fee` Float64
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(tpep_pickup_datetime)
ORDER BY (VendorID, tpep_pickup_datetime, PULocationID, DOLocationID);
Then load the data from your Nessie catalog table via an `INSERT INTO SELECT`:
```sql
INSERT INTO taxis
SELECT * FROM demo.`default.taxis`;
```
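The local `taxis` table above mirrors the catalog schema with the `Nullable(...)` wrappers removed. That mapping is mechanical; a rough sketch (a hypothetical helper, assuming no nested `Nullable` types):

```python
def strip_nullable(column_type: str) -> str:
    """Map a column type from the catalog DDL (e.g. 'Nullable(Int64)')
    to the non-nullable type used in the local MergeTree table."""
    prefix, suffix = "Nullable(", ")"
    if column_type.startswith(prefix) and column_type.endswith(suffix):
        return column_type[len(prefix):-len(suffix)]
    return column_type

print(strip_nullable("Nullable(DateTime64(6))"))  # DateTime64(6)
print(strip_nullable("String"))                   # unchanged
```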
slug: /use-cases/data-lake/rest-catalog
sidebar_label: 'REST catalog'
title: 'REST catalog'
pagination_prev: null
pagination_next: null
description: 'In this guide, we will walk you through the steps to query
your data using ClickHouse and the REST Catalog.'
keywords: ['REST', 'Tabular', 'Data Lake', 'Iceberg']
show_related_blogs: true
doc_type: 'guide'
import BetaBadge from '@theme/badges/BetaBadge';
:::note
Integration with the REST Catalog works with Iceberg tables only.
This integration supports both AWS S3 and other cloud storage providers.
:::
ClickHouse supports integration with multiple catalogs (Unity, Glue, REST, Polaris, etc.). This guide will walk you through the steps to query your data using ClickHouse and the REST Catalog specification.
The REST Catalog is a standardized API specification for Iceberg catalogs, supported by various platforms including:
- **Local development environments** (using docker-compose setups)
- **Managed services** like Tabular.io
- **Self-hosted** REST catalog implementations
:::note
As this feature is experimental, you will need to enable it using:
SET allow_experimental_database_iceberg = 1;
:::
Local Development Setup {#local-development-setup}
For local development and testing, you can use a containerized REST catalog setup. This approach is ideal for learning, prototyping, and development environments.
Prerequisites {#local-prerequisites}
Docker and Docker Compose
: Ensure Docker is installed and running
Sample Setup
: You can use various docker-compose setups (see Alternative Docker Images below)
Setting up Local REST Catalog {#setting-up-local-rest-catalog}
You can use various containerized REST catalog implementations such as Databricks docker-spark-iceberg, which provides a complete Spark + Iceberg + REST catalog environment with docker-compose, making it ideal for testing Iceberg integrations.
Step 1: Create a new folder in which to run the example, then create a file `docker-compose.yml` with the configuration from Databricks docker-spark-iceberg.
Step 2: Next, create a file `docker-compose.override.yml` and place the following ClickHouse container configuration into it:
```yaml
version: '3.8'
services:
clickhouse:
image: clickhouse/clickhouse-server:25.5.6
container_name: clickhouse
user: '0:0' # Ensures root permissions
ports:
- "8123:8123"
- "9002:9000"
volumes:
- ./clickhouse:/var/lib/clickhouse
- ./clickhouse/data_import:/var/lib/clickhouse/data_import # Mount dataset folder
networks:
- iceberg_net
environment:
- CLICKHOUSE_DB=default
- CLICKHOUSE_USER=default
- CLICKHOUSE_DO_NOT_CHOWN=1
- CLICKHOUSE_PASSWORD=
```
Step 3: Run the following command to start the services:
```bash
docker compose up
```
Step 4: Wait for all services to be ready. You can check the logs:
```bash
docker-compose logs -f
```
:::note
The REST catalog setup requires that sample data be loaded into the Iceberg tables first. Make sure the Spark environment has created and populated the tables before attempting to query them through ClickHouse. The availability of tables depends on the specific docker-compose setup and sample data loading scripts.
:::
Connecting to Local REST Catalog {#connecting-to-local-rest-catalog}
Connect to your ClickHouse container:
```bash
docker exec -it clickhouse clickhouse-client
```
Then create the database connection to the REST catalog:
```sql
SET allow_experimental_database_iceberg = 1;
CREATE DATABASE demo
ENGINE = DataLakeCatalog('http://rest:8181/v1', 'admin', 'password')
SETTINGS
catalog_type = 'rest',
storage_endpoint = 'http://minio:9000/lakehouse',
warehouse = 'demo'
```
Querying REST catalog tables using ClickHouse {#querying-rest-catalog-tables-using-clickhouse}
Now that the connection is in place, you can start querying via the REST catalog. For example:
```sql
USE demo;
SHOW TABLES;
```
If your setup includes sample data (such as the taxi dataset), you should see tables like:
```sql title="Response"
┌─name──────────┐
│ default.taxis │
└───────────────┘
```
:::note
If you don't see any tables, this usually means:
1. The Spark environment hasn't created the sample tables yet
2. The REST catalog service isn't fully initialized
3. The sample data loading process hasn't completed
You can check the Spark logs to see the table creation progress:
```bash
docker-compose logs spark
```
:::
To query a table (if available):
```sql
SELECT count(*) FROM `default.taxis`;
```
```sql title="Response"
┌─count()─┐
│ 2171187 │
└─────────┘
```
:::note Backticks required
Backticks are required because ClickHouse doesn't support more than one namespace.
:::
To inspect the table DDL:
```sql
SHOW CREATE TABLE `default.taxis`;
```
sql title="Response"
ββstatementββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β CREATE TABLE demo.`default.taxis` β
β ( β
β `VendorID` Nullable(Int64), β
β `tpep_pickup_datetime` Nullable(DateTime64(6)), β
β `tpep_dropoff_datetime` Nullable(DateTime64(6)), β
β `passenger_count` Nullable(Float64), β
β `trip_distance` Nullable(Float64), β
β `RatecodeID` Nullable(Float64), β
β `store_and_fwd_flag` Nullable(String), β
β `PULocationID` Nullable(Int64), β
β `DOLocationID` Nullable(Int64), β
β `payment_type` Nullable(Int64), β
β `fare_amount` Nullable(Float64), β
β `extra` Nullable(Float64), β
β `mta_tax` Nullable(Float64), β
β `tip_amount` Nullable(Float64), β
β `tolls_amount` Nullable(Float64), β
β `improvement_surcharge` Nullable(Float64), β
β `total_amount` Nullable(Float64), β
β `congestion_surcharge` Nullable(Float64), β
β `airport_fee` Nullable(Float64) β
β ) β
β ENGINE = Iceberg('http://minio:9000/lakehouse/warehouse/default/taxis/', 'admin', '[HIDDEN]') β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Loading data from your Data Lake into ClickHouse {#loading-data-from-your-data-lake-into-clickhouse}
If you need to load data from the REST catalog into ClickHouse, start by creating a local ClickHouse table:
sql
CREATE TABLE taxis
(
`VendorID` Int64,
`tpep_pickup_datetime` DateTime64(6),
`tpep_dropoff_datetime` DateTime64(6),
`passenger_count` Float64,
`trip_distance` Float64,
`RatecodeID` Float64,
`store_and_fwd_flag` String,
`PULocationID` Int64,
`DOLocationID` Int64,
`payment_type` Int64,
`fare_amount` Float64,
`extra` Float64,
`mta_tax` Float64,
`tip_amount` Float64,
`tolls_amount` Float64,
`improvement_surcharge` Float64,
`total_amount` Float64,
`congestion_surcharge` Float64,
`airport_fee` Float64
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(tpep_pickup_datetime)
ORDER BY (VendorID, tpep_pickup_datetime, PULocationID, DOLocationID);
Then load the data from your REST catalog table via an `INSERT INTO SELECT`:
```sql
INSERT INTO taxis
SELECT * FROM demo.`default.taxis`;
```
description: 'Landing page for the Data Lake use case guide'
pagination_prev: null
pagination_next: null
slug: /use-cases/data-lake
title: 'Data Lake'
keywords: ['data lake', 'glue', 'unity', 'rest', 'OneLake']
doc_type: 'landing-page'
ClickHouse supports integration with multiple catalogs (Unity, Glue, REST, Polaris, etc.).
| Page | Description |
|-----|-----|
| Querying data in S3 using ClickHouse and the Glue Data Catalog | Query your data in S3 buckets using ClickHouse and the Glue Data Catalog. |
| Querying data in S3 using ClickHouse and the Unity Data Catalog | Query your data using the Unity Catalog. |
| Querying data in S3 using ClickHouse and the REST Catalog | Query your data using the REST Catalog (Tabular.io). |
| Querying data in S3 using ClickHouse and the Lakekeeper Catalog | Query your data using the Lakekeeper Catalog. |
| Querying data in S3 using ClickHouse and the Nessie Catalog | Query your data using the Nessie Catalog with Git-like data version control. |
| Querying data in Azure using ClickHouse and the OneLake Catalog | Query your data in Microsoft OneLake using ClickHouse and the Iceberg table format. |
slug: /use-cases/data-lake/lakekeeper-catalog
sidebar_label: 'Lakekeeper catalog'
title: 'Lakekeeper catalog'
pagination_prev: null
pagination_next: null
description: 'In this guide, we will walk you through the steps to query
your data using ClickHouse and the Lakekeeper Catalog.'
keywords: ['Lakekeeper', 'REST', 'Tabular', 'Data Lake', 'Iceberg']
show_related_blogs: true
doc_type: 'guide'
import ExperimentalBadge from '@theme/badges/ExperimentalBadge';
:::note
Integration with the Lakekeeper Catalog works with Iceberg tables only.
This integration supports both AWS S3 and other cloud storage providers.
:::
ClickHouse supports integration with multiple catalogs (Unity, Glue, REST, Polaris, etc.). This guide will walk you through the steps to query your data using ClickHouse and the Lakekeeper catalog.
Lakekeeper is an open-source REST catalog implementation for Apache Iceberg that provides:
- **Rust native** implementation for high performance and reliability
- **REST API** compliance with the Iceberg REST catalog specification
- **Cloud storage** integration with S3-compatible storage
:::note
As this feature is experimental, you will need to enable it using:
SET allow_experimental_database_iceberg = 1;
:::
Local Development Setup {#local-development-setup}
For local development and testing, you can use a containerized Lakekeeper setup. This approach is ideal for learning, prototyping, and development environments.
Prerequisites {#local-prerequisites}
Docker and Docker Compose
: Ensure Docker is installed and running
Sample Setup
: You can use the Lakekeeper docker-compose setup
Setting up Local Lakekeeper Catalog {#setting-up-local-lakekeeper-catalog}
You can use the official Lakekeeper docker-compose setup, which provides a complete environment with Lakekeeper, a PostgreSQL metadata backend, and MinIO for object storage.
Step 1: Create a new folder in which to run the example, then create a file `docker-compose.yml` with the following configuration:
```yaml
version: '3.8'
services:
lakekeeper:
image: quay.io/lakekeeper/catalog:latest
environment:
- LAKEKEEPER__PG_ENCRYPTION_KEY=This-is-NOT-Secure!
- LAKEKEEPER__PG_DATABASE_URL_READ=postgresql://postgres:postgres@db:5432/postgres
- LAKEKEEPER__PG_DATABASE_URL_WRITE=postgresql://postgres:postgres@db:5432/postgres
- RUST_LOG=info
command: ["serve"]
healthcheck:
test: ["CMD", "/home/nonroot/lakekeeper", "healthcheck"]
interval: 1s
timeout: 10s
retries: 10
start_period: 30s
depends_on:
migrate:
condition: service_completed_successfully
db:
condition: service_healthy
minio:
condition: service_healthy
ports:
- 8181:8181
networks:
- iceberg_net
migrate:
image: quay.io/lakekeeper/catalog:latest-main
environment:
- LAKEKEEPER__PG_ENCRYPTION_KEY=This-is-NOT-Secure!
- LAKEKEEPER__PG_DATABASE_URL_READ=postgresql://postgres:postgres@db:5432/postgres
- LAKEKEEPER__PG_DATABASE_URL_WRITE=postgresql://postgres:postgres@db:5432/postgres
- RUST_LOG=info
restart: "no"
command: ["migrate"]
depends_on:
db:
condition: service_healthy
networks:
- iceberg_net
bootstrap:
image: curlimages/curl
depends_on:
lakekeeper:
condition: service_healthy
restart: "no"
command:
- -w
- "%{http_code}"
- "-X"
- "POST"
- "-v"
- "http://lakekeeper:8181/management/v1/bootstrap"
- "-H"
- "Content-Type: application/json"
- "--data"
- '{"accept-terms-of-use": true}'
- "-o"
- "/dev/null"
networks:
- iceberg_net
initialwarehouse:
image: curlimages/curl
depends_on:
lakekeeper:
condition: service_healthy
bootstrap:
condition: service_completed_successfully
restart: "no"
command:
- -w
- "%{http_code}"
- "-X"
- "POST"
- "-v"
- "http://lakekeeper:8181/management/v1/warehouse"
- "-H"
- "Content-Type: application/json"
- "--data"
- '{"warehouse-name": "demo", "project-id": "00000000-0000-0000-0000-000000000000", "storage-profile": {"type": "s3", "bucket": "warehouse-rest", "key-prefix": "", "assume-role-arn": null, "endpoint": "http://minio:9000", "region": "local-01", "path-style-access": true, "flavor": "minio", "sts-enabled": true}, "storage-credential": {"type": "s3", "credential-type": "access-key", "aws-access-key-id": "minio", "aws-secret-access-key": "ClickHouse_Minio_P@ssw0rd"}}'
- "-o"
- "/dev/null"
networks:
- iceberg_net
db:
image: bitnami/postgresql:16.3.0
environment:
- POSTGRESQL_USERNAME=postgres
- POSTGRESQL_PASSWORD=postgres
- POSTGRESQL_DATABASE=postgres
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres -p 5432 -d postgres"]
interval: 2s
timeout: 10s
retries: 5
start_period: 10s
volumes:
- postgres_data:/bitnami/postgresql
networks:
- iceberg_net
minio:
image: bitnami/minio:2025.4.22
environment:
- MINIO_ROOT_USER=minio
- MINIO_ROOT_PASSWORD=ClickHouse_Minio_P@ssw0rd
- MINIO_API_PORT_NUMBER=9000
- MINIO_CONSOLE_PORT_NUMBER=9001
- MINIO_SCHEME=http
- MINIO_DEFAULT_BUCKETS=warehouse-rest
networks:
iceberg_net:
aliases:
- warehouse-rest.minio
ports:
- "9002:9000"
- "9003:9001"
healthcheck:
test: ["CMD", "mc", "ls", "local", "|", "grep", "warehouse-rest"]
interval: 2s
timeout: 10s
retries: 3
start_period: 15s
volumes:
- minio_data:/bitnami/minio/data
clickhouse:
image: clickhouse/clickhouse-server:head
container_name: lakekeeper-clickhouse
user: '0:0' # Ensures root permissions
ports:
- "8123:8123"
- "9000:9000"
volumes:
- clickhouse_data:/var/lib/clickhouse
- ./clickhouse/data_import:/var/lib/clickhouse/data_import # Mount dataset folder
networks:
- iceberg_net
environment:
- CLICKHOUSE_DB=default
- CLICKHOUSE_USER=default
- CLICKHOUSE_DO_NOT_CHOWN=1
- CLICKHOUSE_PASSWORD=
depends_on:
lakekeeper:
condition: service_healthy
minio:
condition: service_healthy
volumes:
postgres_data:
minio_data:
clickhouse_data:
networks:
iceberg_net:
driver: bridge
```
Step 2: Run the following command to start the services:
```bash
docker compose up -d
```
Step 3: Wait for all services to be ready. You can check the logs:
```bash
docker-compose logs -f
```
:::note
The Lakekeeper setup requires that sample data be loaded into the Iceberg tables first. Make sure the environment has created and populated the tables before attempting to query them through ClickHouse. The availability of tables depends on the specific docker-compose setup and sample data loading scripts.
:::
Connecting to Local Lakekeeper Catalog {#connecting-to-local-lakekeeper-catalog}
Connect to your ClickHouse container:
```bash
docker exec -it lakekeeper-clickhouse clickhouse-client
```
Then create the database connection to the Lakekeeper catalog:
```sql
SET allow_experimental_database_iceberg = 1;
CREATE DATABASE demo
ENGINE = DataLakeCatalog('http://lakekeeper:8181/catalog', 'minio', 'ClickHouse_Minio_P@ssw0rd')
SETTINGS catalog_type = 'rest', storage_endpoint = 'http://minio:9002/warehouse-rest', warehouse = 'demo'
```
Querying Lakekeeper catalog tables using ClickHouse {#querying-lakekeeper-catalog-tables-using-clickhouse}
Now that the connection is in place, you can start querying via the Lakekeeper catalog. For example:
```sql
USE demo;
SHOW TABLES;
```
If your setup includes sample data (such as the taxi dataset), you should see tables like:
```sql title="Response"
┌─name──────────┐
│ default.taxis │
└───────────────┘
```
:::note
If you don't see any tables, this usually means:
1. The environment hasn't created the sample tables yet
2. The Lakekeeper catalog service isn't fully initialized
3. The sample data loading process hasn't completed
You can check the Spark logs to see the table creation progress:
```bash
docker-compose logs spark
```
:::
To query a table (if available):
```sql
SELECT count(*) FROM `default.taxis`;
```
```sql title="Response"
┌─count()─┐
│ 2171187 │
└─────────┘
```
:::note Backticks required
Backticks are required because ClickHouse doesn't support more than one namespace.
:::
To inspect the table DDL:
```sql
SHOW CREATE TABLE `default.taxis`;
```
sql title="Response"
ββstatementββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β CREATE TABLE demo.`default.taxis` β
β ( β
β `VendorID` Nullable(Int64), β
β `tpep_pickup_datetime` Nullable(DateTime64(6)), β
β `tpep_dropoff_datetime` Nullable(DateTime64(6)), β
β `passenger_count` Nullable(Float64), β
β `trip_distance` Nullable(Float64), β
β `RatecodeID` Nullable(Float64), β
β `store_and_fwd_flag` Nullable(String), β
β `PULocationID` Nullable(Int64), β
β `DOLocationID` Nullable(Int64), β
β `payment_type` Nullable(Int64), β
β `fare_amount` Nullable(Float64), β
β `extra` Nullable(Float64), β
β `mta_tax` Nullable(Float64), β
β `tip_amount` Nullable(Float64), β
β `tolls_amount` Nullable(Float64), β
β `improvement_surcharge` Nullable(Float64), β
β `total_amount` Nullable(Float64), β
β `congestion_surcharge` Nullable(Float64), β
β `airport_fee` Nullable(Float64) β
β ) β
β ENGINE = Iceberg('http://minio:9002/warehouse-rest/warehouse/default/taxis/', 'minio', '[HIDDEN]') β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Loading data from your Data Lake into ClickHouse {#loading-data-from-your-data-lake-into-clickhouse}
If you need to load data from the Lakekeeper catalog into ClickHouse, start by creating a local ClickHouse table:
sql
CREATE TABLE taxis
(
`VendorID` Int64,
`tpep_pickup_datetime` DateTime64(6),
`tpep_dropoff_datetime` DateTime64(6),
`passenger_count` Float64,
`trip_distance` Float64,
`RatecodeID` Float64,
`store_and_fwd_flag` String,
`PULocationID` Int64,
`DOLocationID` Int64,
`payment_type` Int64,
`fare_amount` Float64,
`extra` Float64,
`mta_tax` Float64,
`tip_amount` Float64,
`tolls_amount` Float64,
`improvement_surcharge` Float64,
`total_amount` Float64,
`congestion_surcharge` Float64,
`airport_fee` Float64
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(tpep_pickup_datetime)
ORDER BY (VendorID, tpep_pickup_datetime, PULocationID, DOLocationID);
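The `toYYYYMM(tpep_pickup_datetime)` partition key above assigns each row to a calendar-month partition. As a rough Python equivalent (an illustration of the key's value, not how ClickHouse computes it internally):

```python
from datetime import datetime

def to_yyyymm(ts: datetime) -> int:
    """Rough equivalent of ClickHouse's toYYYYMM(): a numeric
    year-month key, one value per calendar month."""
    return ts.year * 100 + ts.month

print(to_yyyymm(datetime(2015, 7, 1, 12, 30)))  # 201507
```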
Then load the data from your Lakekeeper catalog table via an `INSERT INTO SELECT`:
```sql
INSERT INTO taxis
SELECT * FROM demo.`default.taxis`;
```
slug: /use-cases/data-lake/glue-catalog
sidebar_label: 'AWS Glue catalog'
title: 'AWS Glue catalog'
pagination_prev: null
pagination_next: null
description: 'In this guide, we will walk you through the steps to query
your data in S3 buckets using ClickHouse and the AWS Glue Data Catalog.'
keywords: ['Glue', 'Data Lake']
show_related_blogs: true
doc_type: 'guide'
import BetaBadge from '@theme/badges/BetaBadge';
ClickHouse supports integration with multiple catalogs (Unity, Glue, Polaris,
etc.). In this guide, we will walk you through the steps to query your data in
S3 buckets using ClickHouse and the Glue Data Catalog.
:::note
Glue supports many different table formats, but this integration only supports
Iceberg tables.
:::
Configuring Glue in AWS {#configuring}
To connect to the Glue catalog, you will need to identify the region of your catalog and provide an access key and secret key.
:::note
Currently, the Glue catalog only supports access and secret keys, but we will
support additional authentication approaches in the future.
:::
Creating a connection between Glue data catalog and ClickHouse {#connecting}
With your Glue Data Catalog configured and authentication in place, establish a connection between ClickHouse and the Glue Data Catalog.
```sql title="Query"
CREATE DATABASE glue
ENGINE = DataLakeCatalog
SETTINGS
    catalog_type = 'glue',
    region = 'us-west-2',
    aws_access_key_id = '<access-key>',
    aws_secret_access_key = '<secret-key>'
```
Query the Glue data catalog using ClickHouse {#query-glue-catalog}
Now that the connection is in place, you can start querying Glue:
```sql title="Query"
USE glue;
SHOW TABLES;
```
```sql title="Response"
   ┌─name───────────────────────────────────┐
1. │ iceberg-benchmark.hitsiceberg          │
2. │ iceberg-benchmark.hitsparquet          │
3. │ iceberg_benchmark.hitsdailypartitioned │
4. │ iceberg_benchmark.time_travel          │
   └────────────────────────────────────────┘
```
You can see that some of the tables above, such as `iceberg-benchmark.hitsparquet`, are not Iceberg tables. You won't be able to query these, as only Iceberg is currently supported.
To query a table:
```sql title="Query"
SELECT count(*) FROM `iceberg-benchmark.hitsiceberg`;
```
:::note
Backticks are required because ClickHouse doesn't support more than one namespace.
:::
To inspect the table DDL, run the following query:
```sql
SHOW CREATE TABLE `iceberg-benchmark.hitsiceberg`;
```
sql title="Response"
ββstatementββββββββββββββββββββββββββββββββββββββββββββββββ
1.β CREATE TABLE glue.`iceberg-benchmark.hitsiceberg` β
β ( β
β `watchid` Nullable(Int64), β
β `javaenable` Nullable(Int32), β
β `title` Nullable(String), β
β `goodevent` Nullable(Int32), β
β `eventtime` Nullable(DateTime64(6)), β
β `eventdate` Nullable(Date), β
β `counterid` Nullable(Int32), β
β `clientip` Nullable(Int32), β
β `regionid` Nullable(Int32), β
β `userid` Nullable(Int64), β
β `counterclass` Nullable(Int32), β
β `os` Nullable(Int32), β
β `useragent` Nullable(Int32), β
β `url` Nullable(String), β
β `referer` Nullable(String), β
β `isrefresh` Nullable(Int32), β
β `referercategoryid` Nullable(Int32), β
β `refererregionid` Nullable(Int32), β
β `urlcategoryid` Nullable(Int32), β
β `urlregionid` Nullable(Int32), β
β `resolutionwidth` Nullable(Int32), β
β `resolutionheight` Nullable(Int32), β
β `resolutiondepth` Nullable(Int32), β
β `flashmajor` Nullable(Int32), β
β `flashminor` Nullable(Int32), β
β `flashminor2` Nullable(String), β
β `netmajor` Nullable(Int32), β
β `netminor` Nullable(Int32), β
β `useragentmajor` Nullable(Int32), β
β `useragentminor` Nullable(String), β
β `cookieenable` Nullable(Int32), β
β `javascriptenable` Nullable(Int32), β
β `ismobile` Nullable(Int32), β
β `mobilephone` Nullable(Int32), β
β `mobilephonemodel` Nullable(String), β
β `params` Nullable(String), β
β `ipnetworkid` Nullable(Int32), β
β `traficsourceid` Nullable(Int32), β
β `searchengineid` Nullable(Int32), β
β `searchphrase` Nullable(String), β
β `advengineid` Nullable(Int32), β
β `isartifical` Nullable(Int32), β
β `windowclientwidth` Nullable(Int32), β
β `windowclientheight` Nullable(Int32), β
β `clienttimezone` Nullable(Int32), β
β `clienteventtime` Nullable(DateTime64(6)), β
β `silverlightversion1` Nullable(Int32), β
β `silverlightversion2` Nullable(Int32), β
β `silverlightversion3` Nullable(Int32), β
β `silverlightversion4` Nullable(Int32), β
β `pagecharset` Nullable(String), β
β `codeversion` Nullable(Int32), β
β `islink` Nullable(Int32), β
β `isdownload` Nullable(Int32), β
β `isnotbounce` Nullable(Int32), β
β `funiqid` Nullable(Int64), β
β `originalurl` Nullable(String), β
β `hid` Nullable(Int32), β
β `isoldcounter` Nullable(Int32), β
β `isevent` Nullable(Int32), β
β `isparameter` Nullable(Int32), β
β `dontcounthits` Nullable(Int32), β
β `withhash` Nullable(Int32), β
β `hitcolor` Nullable(String), β
β `localeventtime` Nullable(DateTime64(6)), β
β `age` Nullable(Int32), β
β `sex` Nullable(Int32), β
β `income` Nullable(Int32), β
β `interests` Nullable(Int32), β
β `robotness` Nullable(Int32), β
β `remoteip` Nullable(Int32), β
β `windowname` Nullable(Int32), β
β `openername` Nullable(Int32), β
β `historylength` Nullable(Int32), β
β `browserlanguage` Nullable(String), β
β `browsercountry` Nullable(String), β
β `socialnetwork` Nullable(String), β
β `socialaction` Nullable(String), β
β `httperror` Nullable(Int32), β
β `sendtiming` Nullable(Int32), β
β `dnstiming` Nullable(Int32), β
β `connecttiming` Nullable(Int32), β
β `responsestarttiming` Nullable(Int32), β
β `responseendtiming` Nullable(Int32), β
β `fetchtiming` Nullable(Int32), β
β `socialsourcenetworkid` Nullable(Int32), β
β `socialsourcepage` Nullable(String), β
β `paramprice` Nullable(Int32), β
β `paramorderid` Nullable(String), β
β `paramcurrency` Nullable(String), β | {"source_file": "glue_catalog.md"} | [
c13ab583-cee8-44fa-9433-b5e4e968124d | β `paramprice` Nullable(Int32), β
β `paramorderid` Nullable(String), β
β `paramcurrency` Nullable(String), β
β `paramcurrencyid` Nullable(Int32), β
β `openstatservicename` Nullable(String), β
β `openstatcampaignid` Nullable(String), β
β `openstatadid` Nullable(String), β
β `openstatsourceid` Nullable(String), β
β `utmsource` Nullable(String), β
β `utmmedium` Nullable(String), β
β `utmcampaign` Nullable(String), β
β `utmcontent` Nullable(String), β
β `utmterm` Nullable(String), β
β `fromtag` Nullable(String), β
β `hasgclid` Nullable(Int32), β
β `refererhash` Nullable(Int64), β
β `urlhash` Nullable(Int64), β
β `clid` Nullable(Int32) β
β ) β
βENGINE = Iceberg('s3://<s3-path>') β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | {"source_file": "glue_catalog.md"} | [
952de2d0-9058-4c8f-8a42-1dd121269f2b | Loading data from your Data Lake into ClickHouse {#loading-data-into-clickhouse}
If you need to load data from your Data Lake into ClickHouse, start by creating a
local ClickHouse table:
f5afb3c4-62dd-400e-bdd8-2c68fb3a6eff | sql title="Query"
CREATE TABLE hits
(
`WatchID` BIGINT NOT NULL,
`JavaEnable` SMALLINT NOT NULL,
`Title` TEXT NOT NULL,
`GoodEvent` SMALLINT NOT NULL,
`EventTime` TIMESTAMP NOT NULL,
`EventDate` Date NOT NULL,
`CounterID` INTEGER NOT NULL,
`ClientIP` INTEGER NOT NULL,
`RegionID` INTEGER NOT NULL,
`UserID` BIGINT NOT NULL,
`CounterClass` SMALLINT NOT NULL,
`OS` SMALLINT NOT NULL,
`UserAgent` SMALLINT NOT NULL,
`URL` TEXT NOT NULL,
`Referer` TEXT NOT NULL,
`IsRefresh` SMALLINT NOT NULL,
`RefererCategoryID` SMALLINT NOT NULL,
`RefererRegionID` INTEGER NOT NULL,
`URLCategoryID` SMALLINT NOT NULL,
`URLRegionID` INTEGER NOT NULL,
`ResolutionWidth` SMALLINT NOT NULL,
`ResolutionHeight` SMALLINT NOT NULL,
`ResolutionDepth` SMALLINT NOT NULL,
`FlashMajor` SMALLINT NOT NULL,
`FlashMinor` SMALLINT NOT NULL,
`FlashMinor2` TEXT NOT NULL,
`NetMajor` SMALLINT NOT NULL,
`NetMinor` SMALLINT NOT NULL,
`UserAgentMajor` SMALLINT NOT NULL,
`UserAgentMinor` VARCHAR(255) NOT NULL,
`CookieEnable` SMALLINT NOT NULL,
`JavascriptEnable` SMALLINT NOT NULL,
`IsMobile` SMALLINT NOT NULL,
`MobilePhone` SMALLINT NOT NULL,
`MobilePhoneModel` TEXT NOT NULL,
`Params` TEXT NOT NULL,
`IPNetworkID` INTEGER NOT NULL,
`TraficSourceID` SMALLINT NOT NULL,
`SearchEngineID` SMALLINT NOT NULL,
`SearchPhrase` TEXT NOT NULL,
`AdvEngineID` SMALLINT NOT NULL,
`IsArtifical` SMALLINT NOT NULL,
`WindowClientWidth` SMALLINT NOT NULL,
`WindowClientHeight` SMALLINT NOT NULL,
`ClientTimeZone` SMALLINT NOT NULL,
`ClientEventTime` TIMESTAMP NOT NULL,
`SilverlightVersion1` SMALLINT NOT NULL,
`SilverlightVersion2` SMALLINT NOT NULL,
`SilverlightVersion3` INTEGER NOT NULL,
`SilverlightVersion4` SMALLINT NOT NULL,
`PageCharset` TEXT NOT NULL,
`CodeVersion` INTEGER NOT NULL,
`IsLink` SMALLINT NOT NULL,
`IsDownload` SMALLINT NOT NULL,
`IsNotBounce` SMALLINT NOT NULL,
`FUniqID` BIGINT NOT NULL,
`OriginalURL` TEXT NOT NULL,
`HID` INTEGER NOT NULL,
`IsOldCounter` SMALLINT NOT NULL,
`IsEvent` SMALLINT NOT NULL,
`IsParameter` SMALLINT NOT NULL,
`DontCountHits` SMALLINT NOT NULL,
`WithHash` SMALLINT NOT NULL,
`HitColor` CHAR NOT NULL,
`LocalEventTime` TIMESTAMP NOT NULL,
`Age` SMALLINT NOT NULL,
`Sex` SMALLINT NOT NULL,
`Income` SMALLINT NOT NULL,
`Interests` SMALLINT NOT NULL,
`Robotness` SMALLINT NOT NULL,
`RemoteIP` INTEGER NOT NULL,
`WindowName` INTEGER NOT NULL,
`OpenerName` INTEGER NOT NULL,
`HistoryLength` SMALLINT NOT NULL,
`BrowserLanguage` TEXT NOT NULL,
`BrowserCountry` TEXT NOT NULL,
`SocialNetwork` TEXT NOT NULL,
`SocialAction` TEXT NOT NULL,
`HTTPError` SMALLINT NOT NULL,
`SendTiming` INTEGER NOT NULL,
`DNSTiming` INTEGER NOT NULL,
`ConnectTiming` INTEGER NOT NULL, | {"source_file": "glue_catalog.md"} | [
e96765ab-57ba-4931-b079-18ae50ee6cef | `SocialAction` TEXT NOT NULL,
`HTTPError` SMALLINT NOT NULL,
`SendTiming` INTEGER NOT NULL,
`DNSTiming` INTEGER NOT NULL,
`ConnectTiming` INTEGER NOT NULL,
`ResponseStartTiming` INTEGER NOT NULL,
`ResponseEndTiming` INTEGER NOT NULL,
`FetchTiming` INTEGER NOT NULL,
`SocialSourceNetworkID` SMALLINT NOT NULL,
`SocialSourcePage` TEXT NOT NULL,
`ParamPrice` BIGINT NOT NULL,
`ParamOrderID` TEXT NOT NULL,
`ParamCurrency` TEXT NOT NULL,
`ParamCurrencyID` SMALLINT NOT NULL,
`OpenstatServiceName` TEXT NOT NULL,
`OpenstatCampaignID` TEXT NOT NULL,
`OpenstatAdID` TEXT NOT NULL,
`OpenstatSourceID` TEXT NOT NULL,
`UTMSource` TEXT NOT NULL,
`UTMMedium` TEXT NOT NULL,
`UTMCampaign` TEXT NOT NULL,
`UTMContent` TEXT NOT NULL,
`UTMTerm` TEXT NOT NULL,
`FromTag` TEXT NOT NULL,
`HasGCLID` SMALLINT NOT NULL,
`RefererHash` BIGINT NOT NULL,
`URLHash` BIGINT NOT NULL,
`CLID` INTEGER NOT NULL
)
PRIMARY KEY (CounterID, EventDate, UserID, EventTime, WatchID); | {"source_file": "glue_catalog.md"} | [
90063bf9-a84c-438f-bf65-54213e04637a | Then load the data from your Iceberg table:
sql title="Query"
INSERT INTO default.hits
SELECT * FROM glue.`iceberg-benchmark.hitsiceberg`; | {"source_file": "glue_catalog.md"} | [
db343135-c557-4496-ade9-d52bf15ee070 | slug: /use-cases/AI/jupyter-notebook
sidebar_label: 'Exploring data with Jupyter notebooks and chDB'
title: 'Exploring data in Jupyter notebooks with chDB'
description: 'This guide explains how to set up and use chDB to explore data from ClickHouse Cloud or local files in Jupyter notebooks'
keywords: ['ML', 'Jupyter', 'chDB', 'pandas']
doc_type: 'guide'
import Image from '@theme/IdealImage';
import image_1 from '@site/static/images/use-cases/AI_ML/jupyter/1.png';
import image_2 from '@site/static/images/use-cases/AI_ML/jupyter/2.png';
import image_3 from '@site/static/images/use-cases/AI_ML/jupyter/3.png';
import image_4 from '@site/static/images/use-cases/AI_ML/jupyter/4.png';
import image_5 from '@site/static/images/use-cases/AI_ML/jupyter/5.png';
import image_6 from '@site/static/images/use-cases/AI_ML/jupyter/6.png';
import image_7 from '@site/static/images/use-cases/AI_ML/jupyter/7.png';
import image_8 from '@site/static/images/use-cases/AI_ML/jupyter/8.png';
import image_9 from '@site/static/images/use-cases/AI_ML/jupyter/9.png';
Exploring data with Jupyter notebooks and chDB
In this guide, you will learn how you can explore a dataset stored in ClickHouse Cloud from a Jupyter notebook with the help of
chDB
- a fast in-process SQL OLAP Engine powered by ClickHouse.
Prerequisites:
- a virtual environment
- a working ClickHouse Cloud service and your
connection details
:::tip
If you don't yet have a ClickHouse Cloud account, you can
sign up
for
a trial and get $300 in free credits to begin.
:::
What you'll learn:
- Connect to ClickHouse Cloud from Jupyter notebooks using chDB
- Query remote datasets and convert results to Pandas DataFrames
- Combine cloud data with local CSV files for analysis
- Visualize data using matplotlib
We'll be using the UK Property Price dataset which is available on ClickHouse Cloud as one of the starter datasets.
It contains data about the prices that houses were sold for in the United Kingdom from 1995 to 2024.
Setup {#setup}
To add this dataset to an existing ClickHouse Cloud service, log in to
console.clickhouse.cloud
with your account details.
In the left-hand menu, click on
Data sources
. Then click
Predefined sample data
:
Select
Get started
in the UK property price paid data (4GB) card:
Then click
Import dataset
:
ClickHouse will automatically create the
pp_complete
table in the
default
database and fill the table with 28.92 million rows of price point data.
In order to reduce the likelihood of exposing your credentials, we recommend adding your Cloud username and password as environment variables on your local machine.
From a terminal run the following command to add your username and password as environment variables:
bash
export CLICKHOUSE_USER=default
export CLICKHOUSE_PASSWORD=your_actual_password
:::note
The environment variables above persist only as long as your terminal session.
To set them permanently, add them to your shell configuration file.
::: | {"source_file": "jupyter-notebook.md"} | [
1d947569-910c-4d94-bf15-e114308e41f1 | :::note
The environment variables above persist only as long as your terminal session.
To set them permanently, add them to your shell configuration file.
:::
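Before moving on, it can help to fail fast if either variable is missing. Below is a minimal sketch of such a check — the helper name `load_clickhouse_credentials` is ours for illustration, not part of chDB:

```python
import os

def load_clickhouse_credentials(env=os.environ):
    """Read the ClickHouse Cloud credentials exported above, raising a clear
    error if either variable is missing rather than failing later mid-query."""
    names = ("CLICKHOUSE_USER", "CLICKHOUSE_PASSWORD")
    missing = [name for name in names if not env.get(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return env["CLICKHOUSE_USER"], env["CLICKHOUSE_PASSWORD"]
```

Calling this once at the top of the notebook turns a forgotten `export` into an immediate, readable error.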
Now activate your virtual environment.
From within your virtual environment, install Jupyter Notebook with the following command:
bash
pip install notebook
Then launch Jupyter Notebook with the following command:
bash
jupyter notebook
A new browser window should open with the Jupyter interface on
localhost:8888
.
Click
File
>
New
>
Notebook
to create a new Notebook.
You will be prompted to select a kernel.
Select any Python kernel available to you; in this example we will select the
ipykernel
:
In a blank cell, type the following command to install chDB, which we will use to connect to our remote ClickHouse Cloud instance:
python
pip install chdb
You can now import chDB and run a simple query to check that everything is set up correctly:
```python
import chdb
result = chdb.query("SELECT 'Hello, ClickHouse!' as message")
print(result)
```
Exploring the data {#exploring-the-data}
With the UK price paid data set up and chDB up and running in a Jupyter notebook, we can now get started exploring our data.
Let's imagine we are interested in checking how price has changed with time for a specific area in the UK such as the capital city, London.
ClickHouse's
remoteSecure
function allows you to easily retrieve the data from ClickHouse Cloud.
You can instruct chDB to return this data in-process as a Pandas DataFrame, which is a convenient and familiar way of working with data.
Write the following query to fetch the UK price paid data from your ClickHouse Cloud service and turn it into a
pandas.DataFrame
:
```python
import os
from dotenv import load_dotenv
import chdb
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
# Load environment variables from .env file
load_dotenv()
username = os.environ.get('CLICKHOUSE_USER')
password = os.environ.get('CLICKHOUSE_PASSWORD')
query = f"""
SELECT
toYear(date) AS year,
avg(price) AS avg_price
FROM remoteSecure(
'****.europe-west4.gcp.clickhouse.cloud',
default.pp_complete,
'{username}',
'{password}'
)
WHERE town = 'LONDON'
GROUP BY toYear(date)
ORDER BY year;
"""
df = chdb.query(query, "DataFrame")
df.head()
```
In the snippet above,
chdb.query(query, "DataFrame")
runs the specified query and returns the result as a Pandas DataFrame.
In the query we are using the
remoteSecure
function to connect to ClickHouse Cloud.
The
remoteSecure
function takes the following parameters:
- a connection string
- the name of the database and table to use
- your username
- your password
As a security best practice, you should prefer using environment variables for the username and password parameters rather than specifying them directly in the function, although this is possible if you wish. | {"source_file": "jupyter-notebook.md"} | [
0ae828f3-4a40-48f8-bb37-be99a9aa7bb2 | The
remoteSecure
function connects to the remote ClickHouse Cloud service, runs the query and returns the result.
Depending on the size of your data, this could take a few seconds.
In this case we return an average price point per year, and filter by
town='LONDON'
.
The result is then stored as a DataFrame in a variable called
df
.
df.head()
displays only the first few rows of the returned data:
Run the following command in a new cell to check the types of the columns:
python
df.dtypes
response
year uint16
avg_price float64
dtype: object
Notice that while
date
is of type
Date
in ClickHouse, the derived
year
column in the resulting DataFrame is of type
uint16
, since
toYear
returns a
UInt16
.
chDB automatically infers the most appropriate type when returning the DataFrame.
With the data now available to us in a familiar form, let's explore how prices of property in London have changed with time.
In a new cell, run the following command to build a simple chart of time vs price for London using matplotlib:
```python
plt.figure(figsize=(12, 6))
plt.plot(df['year'], df['avg_price'], marker='o')
plt.xlabel('Year')
plt.ylabel('Price (Β£)')
plt.title('Price of London property over time')
# Show every 2nd year to avoid crowding
years_to_show = df['year'][::2] # Every 2nd year
plt.xticks(years_to_show, rotation=45)
plt.grid(True, alpha=0.3)
plt.tight_layout()
plt.show()
```
Perhaps unsurprisingly, property prices in London have increased substantially over time.
A fellow data scientist has sent us a .csv file with additional housing related variables and is curious how
the number of houses sold in London has changed over time.
Let's plot some of these against the housing prices and see if we can discover any correlation.
You can use the
file
table function to read files directly from your local machine.
In a new cell, run the following command to make a new DataFrame from the local .csv file.
```python
query = f"""
SELECT
toYear(date) AS year,
    sum(houses_sold)*1000 AS houses_sold
FROM file('/Users/datasci/Desktop/housing_in_london_monthly_variables.csv')
WHERE area = 'city of london' AND houses_sold IS NOT NULL
GROUP BY toYear(date)
ORDER BY year;
"""
df_2 = chdb.query(query, "DataFrame")
df_2.head()
```
Read from multiple sources in a single step
It's also possible to read from multiple sources in a single step. You could use the query below using a `JOIN` to do so:
```python
query = f"""
SELECT
toYear(date) AS year,
avg(price) AS avg_price, housesSold
FROM remoteSecure(
'****.europe-west4.gcp.clickhouse.cloud',
default.pp_complete,
'{username}',
'{password}'
) AS remote
JOIN (
SELECT
toYear(date) AS year,
sum(houses_sold)*1000 AS housesSold
FROM file('/Users/datasci/Desktop/housing_in_london_monthly_variables.csv')
WHERE area = 'city of london' AND houses_sold IS NOT NULL
GROUP BY toYear(date)
ORDER BY year
) AS local ON local.year = remote.year
WHERE town = 'LONDON'
GROUP BY year, housesSold
ORDER BY year;
"""
``` | {"source_file": "jupyter-notebook.md"} | [
d7120226-727a-4406-bec3-a8497c514ff8 | Although we are missing data from 2020 onwards, we can plot the two datasets against each other for the years 1995 to 2019.
In a new cell run the following command:
```python
# Create a figure with two y-axes
fig, ax1 = plt.subplots(figsize=(14, 8))
# Plot houses sold on the left y-axis
color = 'tab:blue'
ax1.set_xlabel('Year')
ax1.set_ylabel('Houses Sold', color=color)
ax1.plot(df_2['year'], df_2['houses_sold'], marker='o', color=color, label='Houses Sold', linewidth=2)
ax1.tick_params(axis='y', labelcolor=color)
ax1.grid(True, alpha=0.3)
# Create a second y-axis for price data
ax2 = ax1.twinx()
color = 'tab:red'
ax2.set_ylabel('Average Price (Β£)', color=color)
# Plot price data up until 2019
ax2.plot(df[df['year'] <= 2019]['year'], df[df['year'] <= 2019]['avg_price'], marker='s', color=color, label='Average Price', linewidth=2)
ax2.tick_params(axis='y', labelcolor=color)
# Format price axis with currency formatting
ax2.yaxis.set_major_formatter(plt.FuncFormatter(lambda x, p: f'Β£{x:,.0f}'))
# Set title and show every 2nd year
plt.title('London Housing Market: Sales Volume vs Prices Over Time', fontsize=14, pad=20)
# Use years only up to 2019 for both datasets
all_years = sorted(list(set(df_2[df_2['year'] <= 2019]['year']).union(set(df[df['year'] <= 2019]['year']))))
years_to_show = all_years[::2] # Every 2nd year
ax1.set_xticks(years_to_show)
ax1.set_xticklabels(years_to_show, rotation=45)
# Add legends
ax1.legend(loc='upper left')
ax2.legend(loc='upper right')
plt.tight_layout()
plt.show()
```
From the plotted data, we see that sales started around 160,000 in the year 1995 and surged quickly, peaking at around 540,000 in 1999.
After that, volumes declined sharply through the mid-2000s, dropping severely during the 2007-2008 financial crisis and falling to around 140,000.
Prices on the other hand showed steady, consistent growth from about Β£150,000 in 1995 to around Β£300,000 by 2005.
Growth accelerated significantly after 2012, rising steeply from roughly Β£400,000 to over Β£1,000,000 by 2019.
Unlike sales volume, prices showed minimal impact from the 2008 crisis and maintained an upward trajectory. Yikes!
Summary {#summary}
This guide demonstrated how chDB enables seamless data exploration in Jupyter notebooks by connecting ClickHouse Cloud with local data sources.
Using the UK Property Price dataset, we showed how to query remote ClickHouse Cloud data with the
remoteSecure()
function, read local CSV files with the
file()
table function, and convert results directly to Pandas DataFrames for analysis and visualization.
Through chDB, data scientists can leverage ClickHouse's powerful SQL capabilities alongside familiar Python tools like Pandas and matplotlib, making it easy to combine multiple data sources for comprehensive analysis.
While many a London-based data scientist may not be able to afford their own home or apartment any time soon, at least they can analyze the market that priced them out! | {"source_file": "jupyter-notebook.md"} | [
8c7af61f-8f6e-4946-89f3-28b5d02fbf94 | slug: /use-cases/AI/marimo-notebook
sidebar_label: 'Exploring data with Marimo notebooks and chDB'
title: 'Exploring data with Marimo notebooks and chDB'
description: 'This guide explains how to setup and use chDB to explore data from ClickHouse Cloud or local files in Marimo notebooks'
keywords: ['ML', 'Marimo', 'chDB', 'pandas']
doc_type: 'guide'
import Image from '@theme/IdealImage';
import image_1 from '@site/static/images/use-cases/AI_ML/jupyter/1.png';
import image_2 from '@site/static/images/use-cases/AI_ML/jupyter/2.png';
import image_3 from '@site/static/images/use-cases/AI_ML/jupyter/3.png';
import image_4 from '@site/static/images/use-cases/AI_ML/Marimo/4.png';
import image_5 from '@site/static/images/use-cases/AI_ML/Marimo/5.png';
import image_6 from '@site/static/images/use-cases/AI_ML/Marimo/6.png';
import image_7 from '@site/static/images/use-cases/AI_ML/Marimo/7.gif';
import image_8 from '@site/static/images/use-cases/AI_ML/Marimo/8.gif';
In this guide, you will learn how you can explore a dataset stored in ClickHouse Cloud from a Marimo notebook with the help of
chDB
- a fast in-process SQL OLAP Engine powered by ClickHouse.
Prerequisites:
- Python 3.8 or higher
- a virtual environment
- a working ClickHouse Cloud service and your
connection details
:::tip
If you don't yet have a ClickHouse Cloud account, you can
sign up
for
a trial and get $300 in free credits to begin.
:::
What you'll learn:
- Connect to ClickHouse Cloud from Marimo notebooks using chDB
- Query remote datasets and convert results to Pandas DataFrames
- Visualize data using Plotly in Marimo
- Leverage Marimo's reactive execution model for interactive data exploration
We'll be using the UK Property Price dataset which is available on ClickHouse Cloud as one of the starter datasets.
It contains data about the prices that houses were sold for in the United Kingdom from 1995 to 2024.
Setup {#setup}
Loading the dataset {#loading-the-dataset}
To add this dataset to an existing ClickHouse Cloud service, log in to
console.clickhouse.cloud
with your account details.
In the left-hand menu, click on
Data sources
. Then click
Predefined sample data
:
Select
Get started
in the UK property price paid data (4GB) card:
Then click
Import dataset
:
ClickHouse will automatically create the
pp_complete
table in the
default
database and fill the table with 28.92 million rows of price point data.
Setting up credentials {#setting-up-credentials}
In order to reduce the likelihood of exposing your credentials, we recommend you add your Cloud username and password as environment variables on your local machine.
From a terminal, run the following command to add your username and password as environment variables:
bash
export CLICKHOUSE_CLOUD_HOSTNAME=<HOSTNAME>
export CLICKHOUSE_CLOUD_USER=default
export CLICKHOUSE_CLOUD_PASSWORD=your_actual_password | {"source_file": "marimo-notebook.md"} | [
9edd847d-b643-4bc7-b54a-511905e52f25 | Setting up credentials {#setting-up-credentials}
bash
export CLICKHOUSE_CLOUD_HOSTNAME=<HOSTNAME>
export CLICKHOUSE_CLOUD_USER=default
export CLICKHOUSE_CLOUD_PASSWORD=your_actual_password
:::note
The environment variables above persist only as long as your terminal session.
To set them permanently, add them to your shell configuration file.
:::
Installing Marimo {#installing-marimo}
Now activate your virtual environment.
From within your virtual environment, install the following packages that we will be using in this guide:
bash
pip install chdb pandas plotly marimo
Create a new Marimo notebook with the following command:
bash
marimo edit clickhouse_exploration.py
A new browser window should open with the Marimo interface on localhost:2718:
Marimo notebooks are stored as pure Python files, making them easy to version control and share with others.
Installing dependencies {#installing-dependencies}
In a new cell, import the required packages:
python
import marimo as mo
import chdb
import pandas as pd
import os
import plotly.express as px
import plotly.graph_objects as go
If you hover your mouse over the cell you will see two circles with the "+" symbol appear.
You can click these to add new cells.
Add a new cell and run a simple query to check that everything is set up correctly:
python
result = chdb.query("SELECT 'Hello ClickHouse from Marimo!'", "DataFrame")
result
You should see the result shown underneath the cell you just ran:
Exploring the data {#exploring-the-data}
With the UK price paid data set up and chDB up and running in a Marimo notebook, we can now get started exploring our data.
Let's imagine we are interested in checking how price has changed with time for a specific area in the UK such as the capital city, London.
ClickHouse's
remoteSecure
function allows you to easily retrieve the data from ClickHouse Cloud.
You can instruct chDB to return this data in-process as a Pandas DataFrame, which is a convenient and familiar way of working with data.
Querying ClickHouse Cloud data {#querying-clickhouse-cloud-data}
Create a new cell with the following query to fetch the UK price paid data from your ClickHouse Cloud service and turn it into a
pandas.DataFrame
:
```python
query = f"""
SELECT
toYear(date) AS year,
round(avg(price)) AS price,
bar(price, 0, 1000000, 80)
FROM remoteSecure(
'{os.environ.get("CLICKHOUSE_CLOUD_HOSTNAME")}',
'default.pp_complete',
'{os.environ.get("CLICKHOUSE_CLOUD_USER")}',
'{os.environ.get("CLICKHOUSE_CLOUD_PASSWORD")}'
)
WHERE town = 'LONDON'
GROUP BY year
ORDER BY year
"""
df = chdb.query(query, "DataFrame")
df.head()
```
In the snippet above,
chdb.query(query, "DataFrame")
runs the specified query and outputs the result as a Pandas DataFrame.
In the query we are using the
remoteSecure
function to connect to ClickHouse Cloud. | {"source_file": "marimo-notebook.md"} | [
9c167c07-02a5-4933-993e-dd7eb27da3c3 | In the query we are using the
remoteSecure
function to connect to ClickHouse Cloud.
The
remoteSecure
function takes the following parameters:
- a connection string
- the name of the database and table to use
- your username
- your password
As a security best practice, you should prefer using environment variables for the username and password parameters rather than specifying them directly in the function, although this is possible if you wish.
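As a rough illustration of that practice, the connection expression can be assembled from the environment variables exported earlier — `remote_secure_expr` below is a hypothetical helper of ours, not part of chDB or ClickHouse:

```python
import os

def remote_secure_expr(host, database_table, env=os.environ):
    """Build the remoteSecure(...) table expression used in the queries above,
    reading credentials from the environment instead of hard-coding them."""
    user = env.get("CLICKHOUSE_CLOUD_USER", "default")
    password = env.get("CLICKHOUSE_CLOUD_PASSWORD", "")
    return f"remoteSecure('{host}', '{database_table}', '{user}', '{password}')"
```

The full query is then just an f-string wrapped around this expression, so the secrets never appear in the saved notebook file itself.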
The
remoteSecure
function connects to the remote ClickHouse Cloud service, runs the query and returns the result.
Depending on the size of your data, this could take a few seconds.
In this case we return an average price point per year, and filter by
town='LONDON'
.
The result is then stored as a DataFrame in a variable called
df
.
Visualizing the data {#visualizing-the-data}
With the data now available to us in a familiar form, let's explore how prices of property in London have changed with time.
Marimo works particularly well with interactive plotting libraries like Plotly.
In a new cell, create an interactive chart:
```python
fig = px.line(
df,
x='year',
y='price',
title='Average Property Prices in London Over Time',
labels={'price': 'Average Price (Β£)', 'year': 'Year'}
)
fig.update_traces(mode='lines+markers')
fig.update_layout(hovermode='x unified')
fig
```
Perhaps unsurprisingly, property prices in London have increased substantially over time.
One of Marimo's strengths is its reactive execution model. Let's create an interactive widget to select different towns dynamically.
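To make that execution model concrete, here is a toy, stdlib-only sketch of reactive re-execution — an illustration of the idea only, not marimo's actual implementation:

```python
from graphlib import TopologicalSorter

class ReactiveNotebook:
    """Toy reactive model: each cell lists the cells it reads from, and
    updating one cell re-runs every downstream cell in dependency order."""

    def __init__(self):
        self.deps = {}    # cell name -> names of cells it reads from
        self.fns = {}     # cell name -> function of its dependencies' values
        self.values = {}

    def cell(self, name, deps, fn):
        self.deps[name] = deps
        self.fns[name] = fn
        self._run(name)

    def _downstream(self, name):
        # every cell that transitively depends on `name`, in topological order
        order = list(TopologicalSorter(self.deps).static_order())
        hit = {name}
        for n in order:
            if any(d in hit for d in self.deps.get(n, [])):
                hit.add(n)
        return [n for n in order if n in hit]

    def _run(self, name):
        for n in self._downstream(name):
            self.values[n] = self.fns[n](*(self.values[d] for d in self.deps[n]))

    def update(self, name, fn):
        self.fns[name] = fn
        self._run(name)
```

In marimo itself this bookkeeping is automatic — changing the dropdown's value is the `update`, and dependent cells re-run on their own.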
Interactive town selection {#interactive-town-selection}
In a new cell, create a dropdown to select different towns:
python
town_selector = mo.ui.dropdown(
options=['LONDON', 'MANCHESTER', 'BIRMINGHAM', 'LEEDS', 'LIVERPOOL'],
value='LONDON',
label='Select a town:'
)
town_selector
In another cell, create a query that reacts to the town selection. When you change the dropdown, this cell will automatically re-execute:
```python
query_reactive = f"""
SELECT
toYear(date) AS year,
round(avg(price)) AS price
FROM remoteSecure(
'{os.environ.get("CLICKHOUSE_CLOUD_HOSTNAME")}',
'default.pp_complete',
'{os.environ.get("CLICKHOUSE_CLOUD_USER")}',
'{os.environ.get("CLICKHOUSE_CLOUD_PASSWORD")}'
)
WHERE town = '{town_selector.value}'
GROUP BY year
ORDER BY year
"""
df_reactive = chdb.query(query_reactive, "DataFrame")
df_reactive
```
Now create a chart that updates automatically when you change the town.
You can move the chart above the dynamic dataframe so that it appears
below the cell with the dropdown.
```python
fig_reactive = px.line(
df_reactive,
x='year',
y='price',
title=f'Average Property Prices in {town_selector.value} Over Time',
labels={'price': 'Average Price (Β£)', 'year': 'Year'}
)
fig_reactive.update_traces(mode='lines+markers')
fig_reactive.update_layout(hovermode='x unified')
fig_reactive
``` | {"source_file": "marimo-notebook.md"} | [
10ef7bc1-22a0-4ecc-abd4-6eda7c58590c | fig_reactive.update_traces(mode='lines+markers')
fig_reactive.update_layout(hovermode='x unified')
fig_reactive
```
Now when you select a town from the drop-down the chart will update dynamically:
Exploring price distributions with interactive box plots {#exploring-price-distributions}
Let's dive deeper into the data by examining the distribution of property prices in London for different years.
A box and whisker plot will show us the median, quartiles, and outliers, giving us a much better understanding than just the average price.
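For reference, the statistics such a box plot reports can be sketched with the standard library alone — a rough implementation of the common Tukey convention (points beyond 1.5 × IQR count as outliers), which box-plot libraries typically use by default:

```python
from statistics import quantiles

def tukey_summary(prices):
    """Quartiles plus the 1.5*IQR fences beyond which a box plot
    would draw individual points as outliers."""
    q1, med, q3 = quantiles(prices, n=4, method="inclusive")
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    outliers = [p for p in prices if p < lower or p > upper]
    return {"q1": q1, "median": med, "q3": q3, "outliers": outliers}
```

Running this over the raw prices for a year reproduces the numbers the interactive plot below will display.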
First, let's create a year slider that will let us interactively explore different years:
In a new cell, add the following:
python
year_slider = mo.ui.slider(
start=1995,
stop=2024,
value=2020,
step=1,
label='Select Year:',
show_value=True
)
year_slider
Now, let's query the individual property prices for the selected year.
Note that we're not aggregating here - we want all the individual transactions to build our distribution:
```python
query_distribution = f"""
SELECT
price,
toYear(date) AS year
FROM remoteSecure(
'{os.environ.get("CLICKHOUSE_CLOUD_HOSTNAME")}',
'default.pp_complete',
'{os.environ.get("CLICKHOUSE_CLOUD_USER")}',
'{os.environ.get("CLICKHOUSE_CLOUD_PASSWORD")}'
)
WHERE town = 'LONDON'
AND toYear(date) = {year_slider.value}
AND price > 0
AND price < 5000000
"""
df_distribution = chdb.query(query_distribution, "DataFrame")
# Create an interactive box plot
fig_box = go.Figure()
fig_box.add_trace(
go.Box(
y=df_distribution['price'],
name=f'London {year_slider.value}',
boxmean='sd', # Show mean and standard deviation
marker_color='lightblue',
boxpoints='outliers' # Show outlier points
)
)
fig_box.update_layout(
title=f'Distribution of Property Prices in London ({year_slider.value})',
yaxis=dict(
title='Price (Β£)',
tickformat=',.0f'
),
showlegend=False,
height=600
)
fig_box
```
If you select the options button in the top right-hand corner of the cell, you can hide
the code.
As you move the slider, the plot will automatically update thanks to Marimo's reactive execution:
Summary {#summary}
This guide demonstrated how you can use chDB to explore your data in ClickHouse Cloud using Marimo notebooks.
Using the UK Property Price dataset, we showed how to query remote ClickHouse Cloud data with the
remoteSecure()
function, and convert results directly to Pandas DataFrames for analysis and visualization.
Through chDB and Marimo's reactive execution model, data scientists can leverage ClickHouse's powerful SQL capabilities alongside familiar Python tools like Pandas and Plotly, with the added benefit of interactive widgets and automatic dependency tracking that make exploratory analysis more efficient and reproducible.
slug: /use-cases/AI/MCP/ollama
sidebar_label: 'Integrate Ollama'
title: 'Set Up ClickHouse MCP Server with Ollama'
pagination_prev: null
pagination_next: null
description: 'This guide explains how to set up Ollama with a ClickHouse MCP server.'
keywords: ['AI', 'Ollama', 'MCP']
show_related_blogs: true
doc_type: 'guide'
import {CardHorizontal} from '@clickhouse/click-ui/bundled'
import Link from '@docusaurus/Link';
import Image from '@theme/IdealImage';
Using ClickHouse MCP server with Ollama
This guide explains how to use the ClickHouse MCP Server with Ollama.
Install Ollama {#install-ollama}
Ollama is a tool for running Large Language Models (LLMs) on your own machine.
It has a
wide range of models available
and is easy to use.
You can download Ollama for Mac, Windows, or Linux from the
download page
.
Once you run Ollama, it will start a local server in the background that you can use to run models.
Alternatively, you can run the server manually by running
ollama serve
.
Once installed, you can pull a model down to your machine like this:
```bash
ollama pull qwen3:8b
```
This will pull the model to your local machine if it is not present.
Once it's downloaded, you can run the model like this:
```bash
ollama run qwen3:8b
```
:::note
Only
models that have tool support
will work with MCP Servers.
:::
We can list the models that we have downloaded like this:
```bash
ollama ls
```

```text
NAME            ID              SIZE      MODIFIED
qwen3:latest    500a1f067a9f    5.2 GB    3 days ago
```
We can use the following command to see more information about the model that we've downloaded:
```bash
ollama show qwen3
```
```text
  Model
    architecture        qwen3
    parameters          8.2B
    context length      40960
    embedding length    4096
    quantization        Q4_K_M

  Capabilities
    completion
    tools

  Parameters
    repeat_penalty    1
    stop              "<|im_start|>"
    stop              "<|im_end|>"
    temperature       0.6
    top_k             20
    top_p             0.95

  License
    Apache License
    Version 2.0, January 2004
```
We can see from this output that the default qwen3 model has just over 8 billion parameters.
Install MCPHost {#install-mcphost}
At the time of writing (July 2025) there is no native functionality for using Ollama with MCP Servers.
However, we can use
MCPHost
to run Ollama models with MCP Servers.
MCPHost is a Go application, so you'll need to make sure that you have
Go installed
on your machine.
You can then install MCPHost by running the following command:
```bash
go install github.com/mark3labs/mcphost@latest
```
The binary will be installed under
~/go/bin
so we need to make sure that directory is on our path.
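For example, assuming the default Go install location of `~/go/bin` (bash shown; adapt the profile file for your shell):

```shell
# Go installs binaries to $GOPATH/bin, which defaults to ~/go/bin.
# Add it to PATH for the current shell session:
export PATH="$PATH:$HOME/go/bin"

# Persist it for future sessions (bash shown; use ~/.zshrc for zsh):
echo 'export PATH="$PATH:$HOME/go/bin"' >> ~/.bashrc
```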
Configuring ClickHouse MCP Server {#configure-clickhouse-mcp-server}
We can configure MCP Servers with MCPHost in YAML or JSON files.
MCPHost will look for config files in your home directory in the following order:
.mcphost.yml
or
.mcphost.json
(preferred)
.mcp.yml
or
.mcp.json
(backwards compatibility)
It uses a syntax that's similar to that used in the standard MCP configuration file.
Here's an example of a ClickHouse MCP server configuration, which we'll save to the
~/.mcphost.json
file:
```json
{
  "mcpServers": {
    "mcp-ch": {
      "type": "local",
      "command": [
        "uv",
        "run",
        "--with",
        "mcp-clickhouse",
        "--python",
        "3.10",
        "mcp-clickhouse"
      ]
    }
  }
}
```
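If you prefer the YAML form, the same server definition in `~/.mcphost.yml` would look roughly like this (a sketch that assumes the YAML keys mirror the JSON ones one-to-one; check MCPHost's documentation for the exact schema):

```yaml
mcpServers:
  mcp-ch:
    type: local
    command:
      - uv
      - run
      - --with
      - mcp-clickhouse
      - --python
      - "3.10"
      - mcp-clickhouse
```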
The main difference from the standard MCP configuration file is that we need to specify a
type
.
The type is used to indicate the transport type used by the MCP Server.
local → stdio transport
remote → streamable transport
builtin → inprocess transport
We'll also need to configure the following environment variables:
```bash
export CLICKHOUSE_HOST=sql-clickhouse.clickhouse.com
export CLICKHOUSE_USER=demo
export CLICKHOUSE_PASSWORD=""
```
:::note
In theory, you should be able to provide these variables under the
environment
key in the MCP configuration file, but we've found that this doesn't work.
:::
Running MCPHost {#running-mcphost}
Once you've configured the ClickHouse MCP server, you can run MCPHost by running the following command:
```bash
mcphost --model ollama:qwen3
```
Or, if you want to have it use a specific config file:
```bash
mcphost --model ollama:qwen3 --config ~/.mcphost.json
```
:::warning
If you don't provide
--model
, MCPHost will look in the environment variables for
ANTHROPIC_API_KEY
and will use the
anthropic:claude-sonnet-4-20250514
model.
:::
We should see the following output:
```text
Model loaded: ollama (qwen3)
MCPHost System (09:52)

Model loaded successfully on GPU
MCPHost System (09:52)

Loaded 3 tools from MCP servers
MCPHost System (09:52)

Enter your prompt (Type /help for commands, Ctrl+C to quit, ESC to cancel generation)
```
We can use the
/servers
command to list the MCP Servers:
```text
## Configured MCP Servers

1. mcp-ch
   MCPHost System (10:00)
```
And
/tools
to list the tools available:
```text
## Available Tools

1. mcp-ch__list_databases
2. mcp-ch__list_tables
3. mcp-ch__run_select_query
```
We can then ask the model questions about the databases/tables available in the ClickHouse SQL playground.
In our experience, when using smaller models (the default qwen3 model has 8 billion parameters), you'll need to be more specific about what you'd like it to do.
For example, you'll need to explicitly ask it to list the databases and tables rather than immediately asking it to query a certain table.
You can partially alleviate this problem by using a larger model (e.g. qwen3:14b), but that will run more slowly on consumer hardware.
slug: /use-cases/AI/MCP/anythingllm
sidebar_label: 'Integrate AnythingLLM'
title: 'Set Up ClickHouse MCP Server with AnythingLLM and ClickHouse Cloud'
pagination_prev: null
pagination_next: null
description: 'This guide explains how to set up AnythingLLM with a ClickHouse MCP server using Docker.'
keywords: ['AI', 'AnythingLLM', 'MCP']
show_related_blogs: true
doc_type: 'guide'
import {CardHorizontal} from '@clickhouse/click-ui/bundled'
import Link from '@docusaurus/Link';
import Image from '@theme/IdealImage';
import Conversation from '@site/static/images/use-cases/AI_ML/MCP/allm_conversation.png';
import MCPServers from '@site/static/images/use-cases/AI_ML/MCP/allm_mcp-servers.png';
import ToolIcon from '@site/static/images/use-cases/AI_ML/MCP/alm_tool-icon.png';
Using ClickHouse MCP server with AnythingLLM
This guide explains how to set up
AnythingLLM
with a ClickHouse MCP server using Docker
and connect it to the ClickHouse example datasets.
Install Docker {#install-docker}
You will need Docker to run AnythingLLM and the MCP server. To get Docker:
1. Visit
docker.com
2. Download Docker desktop for your operating system
3. Install Docker by following the instructions for your operating system
4. Open Docker Desktop and ensure it is running
For more information, see the
Docker documentation
.
Pull AnythingLLM Docker image {#pull-anythingllm-docker-image}
Run the following command to pull the AnythingLLM Docker image to your machine:
```bash
docker pull mintplexlabs/anythingllm
```
Setup storage location {#setup-storage-location}
Create a directory for storage and initialize the environment file:
```bash
export STORAGE_LOCATION=$PWD/anythingllm && \
mkdir -p $STORAGE_LOCATION && \
touch "$STORAGE_LOCATION/.env"
```
Configure MCP Server config file {#configure-mcp-server-config-file}
Create the
plugins
directory:
```bash
mkdir -p "$STORAGE_LOCATION/plugins"
```
Create a file called
anythingllm_mcp_servers.json
in the
plugins
directory and add the following contents:
```json
{
  "mcpServers": {
    "mcp-clickhouse": {
      "command": "uv",
      "args": [
        "run",
        "--with",
        "mcp-clickhouse",
        "--python",
        "3.10",
        "mcp-clickhouse"
      ],
      "env": {
        "CLICKHOUSE_HOST": "sql-clickhouse.clickhouse.com",
        "CLICKHOUSE_USER": "demo",
        "CLICKHOUSE_PASSWORD": ""
      }
    }
  }
}
```
If you want to explore your own data, you can do so by
using the
host, username and password
of your own ClickHouse Cloud service.
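For reference, a sketch of the same file pointed at your own service (the host, user, and password values below are placeholders you'd replace with your service's credentials; the extra `CLICKHOUSE_SECURE` setting mirrors the secure-connection settings used in the Claude Desktop guide):

```json
{
  "mcpServers": {
    "mcp-clickhouse": {
      "command": "uv",
      "args": [
        "run",
        "--with",
        "mcp-clickhouse",
        "--python",
        "3.10",
        "mcp-clickhouse"
      ],
      "env": {
        "CLICKHOUSE_HOST": "your-service.region.clickhouse.cloud",
        "CLICKHOUSE_USER": "default",
        "CLICKHOUSE_PASSWORD": "your-password",
        "CLICKHOUSE_SECURE": "true"
      }
    }
  }
}
```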
Start the AnythingLLM Docker container {#start-anythingllm-docker-container}
Run the following command to start the AnythingLLM Docker container:
```bash
docker run -p 3001:3001 \
  --cap-add SYS_ADMIN \
  -v ${STORAGE_LOCATION}:/app/server/storage \
  -v ${STORAGE_LOCATION}/.env:/app/server/.env \
  -e STORAGE_DIR="/app/server/storage" \
  mintplexlabs/anythingllm
```
Once that's started, navigate to
http://localhost:3001
in your browser.
Select the model that you want to use and provide your API key.
Wait for MCP Servers to start up {#wait-for-mcp-servers-to-start-up}
Click on the tool icon in the bottom left-hand side of the UI:
Click on
Agent Skills
and look under the
MCP Servers
section.
Wait until you see
Mcp ClickHouse
set to
On
Chat with ClickHouse MCP Server with AnythingLLM {#chat-with-clickhouse-mcp-server-with-anythingllm}
We're now ready to start a chat.
To make MCP Servers available to a chat, you'll need to prefix the first message in the conversation with
@agent
.
slug: /use-cases/AI/MCP/remote_mcp
sidebar_label: 'ClickHouse Cloud remote MCP'
title: 'Enabling the ClickHouse Cloud Remote MCP Server'
pagination_prev: null
pagination_next: null
description: 'This guide explains how to enable and use the ClickHouse Cloud Remote MCP'
keywords: ['AI', 'ClickHouse Cloud', 'MCP']
show_related_blogs: true
sidebar_position: 1
doc_type: 'guide'
import {CardHorizontal} from '@clickhouse/click-ui/bundled'
import Link from '@docusaurus/Link';
import Image from '@theme/IdealImage';
import img1 from '@site/static/images/use-cases/AI_ML/MCP/1connectmcpmodal.png';
import img2 from '@site/static/images/use-cases/AI_ML/MCP/2enable_mcp.png';
import img3 from '@site/static/images/use-cases/AI_ML/MCP/3oauth.png';
import img4 from '@site/static/images/use-cases/AI_ML/MCP/4oauth_success.png';
import img5 from '@site/static/images/use-cases/AI_ML/MCP/5connected_mcp_claude.png';
import img6 from '@site/static/images/use-cases/AI_ML/MCP/6slash_mcp_claude.png';
import img7 from '@site/static/images/use-cases/AI_ML/MCP/7usage_mcp.png';
Enabling the ClickHouse Cloud Remote MCP Server
This guide explains how to enable and use the ClickHouse Cloud Remote MCP Server. We will use Claude Code as an MCP Client for this example, but any LLM client that supports MCP can be used.
Enable the remote MCP server for your ClickHouse Cloud service {#enable-remote-mcp-server}
Connect to your ClickHouse Cloud Service, click on the
Connect
button, and enable the Remote MCP Server for your Service
Copy the URL of the ClickHouse Cloud MCP Server from the
Connect
view or below
```text
https://mcp.clickhouse.cloud/mcp
```
Add the ClickHouse MCP Server in Claude Code {#add-clickhouse-mcp-server-claude-code}
In your working directory, run the following command to add the ClickHouse Cloud MCP Server configuration to Claude Code. In this example, we named the MCP server in the Claude Code config
clickhouse_cloud
```bash
claude mcp add --transport http clickhouse_cloud https://mcp.clickhouse.cloud/mcp
```
Alternatively, depending on the MCP client used, you can also edit the JSON config directly:
```json
{
  "mcpServers": {
    "clickhouse-remote": {
      "url": "https://mcp.clickhouse.cloud/mcp"
    }
  }
}
```
Launch Claude Code in your working directory
```bash
[user@host ~/Documents/repos/mcp_test] $ claude
```
Authenticate to ClickHouse Cloud via OAuth {#authenticate-via-oauth}
Claude Code will open a browser window on the first session. Otherwise, you can also trigger a connection by running the
/mcp
command in Claude Code and selecting the
clickhouse_cloud
MCP server
Authenticate using your ClickHouse Cloud credentials
Use the ClickHouse Cloud Remote MCP Server from Claude Code {#use-rempte-mcp-from-claude-code}
Verify in Claude Code that the remote MCP server is connected
Congratulations! You can now use the ClickHouse Cloud Remote MCP Server from Claude Code.
Although this example used Claude Code, you can use any LLM client that supports MCP by following similar steps.
slug: /use-cases/AI/MCP
sidebar_label: 'MCP'
title: 'MCP guides'
pagination_prev: null
pagination_next: null
description: 'This page provides an intro to Model Context Protocol (MCP) and has a table of contents for MCP guides.'
keywords: ['ClickHouse', 'MCP']
show_related_blogs: true
doc_type: 'guide'
import Image from '@theme/IdealImage';
import overview from '@site/static/images/use-cases/AI_ML/MCP/mcp_overview.png';
Model Context Protocol
(MCP) is a standard developed by Anthropic that enables AI assistants to seamlessly integrate with external systems.
This protocol allows AI assistants to connect to data sources, APIs, databases, and more in a secure and standardized way.
MCP creates a universal interface between AI models and various services, eliminating the need for custom tool implementations for each integration.
You can think of it as a universal API standard designed specifically for AI systems.
The key advantage of MCP is that AI libraries only need to implement support for the protocol once.
After that, all MCP-compatible services become immediately accessible, saving AI library maintainers a lot of time.
What is MCP's architecture? {#mcp-architecture}
MCP follows a client-server architecture:
Clients (like Claude Desktop, Cursor, or VS Code) establish connections with MCP servers. You can see a collection of clients in the
awesome-mcp-clients
GitHub repository.
Servers expose tools and capabilities through standardized interfaces. You can see a collection of servers in the
awesome-mcp-servers
GitHub repository.
AI models can then use these tools to access external data and functionality when needed
A diagram showing the architecture is below:
Does ClickHouse have an MCP Server? {#clickhouse-mcp-server}
It does!
The
ClickHouse MCP Server
offers the following tools:
run_select_query
- Execute SQL queries on your ClickHouse cluster.
list_databases
- List all databases on your ClickHouse cluster.
list_tables
- List all tables in a database.
Guides for using the ClickHouse MCP Server {#clickhouse-mcp-server-guides}
Below are some guides showing how to use the ClickHouse MCP Server.
| Page | Description |
|-----|-----|
| Enabling the ClickHouse Cloud Remote MCP Server | This guide explains how to enable and use the ClickHouse Cloud Remote MCP Server |
| How to build a ClickHouse-backed AI Agent with Streamlit | Learn how to build a web-based AI Agent with Streamlit and the ClickHouse MCP Server |
| How to build a LangChain/LangGraph AI agent using ClickHouse MCP Server. | Learn how to build a LangChain/LangGraph AI agent that can interact with ClickHouse's SQL playground using ClickHouse's MCP Server. |
| How to build a LlamaIndex AI agent using ClickHouse MCP Server. | Learn how to build a LlamaIndex AI agent that can interact with ClickHouse MCP Server. |
| How to build a PydanticAI agent using ClickHouse MCP Server. | Learn how to build a PydanticAI agent that can interact with ClickHouse MCP Server. |
| How to build a SlackBot agent using ClickHouse MCP Server. | Learn how to build a SlackBot agent that can interact with ClickHouse MCP Server. |
| How to build an AI Agent with Agno and the ClickHouse MCP Server | Learn how to build an AI Agent with Agno and the ClickHouse MCP Server |
| How to build an AI Agent with Chainlit and the ClickHouse MCP Server | Learn how to use Chainlit to build LLM-based chat apps together with the ClickHouse MCP Server |
| How to build an AI Agent with Claude Agent SDK and the ClickHouse MCP Server | Learn how to build an AI Agent with Claude Agent SDK and the ClickHouse MCP Server |
| How to build an AI Agent with CopilotKit and the ClickHouse MCP Server | Learn how to build an agentic application using data stored in ClickHouse with ClickHouse MCP and CopilotKit |
| How to build an AI Agent with CrewAI and the ClickHouse MCP Server | Learn how to build an AI Agent with CrewAI and the ClickHouse MCP Server |
| How to build an AI Agent with DSPy and the ClickHouse MCP Server | Learn how to build an AI agent with DSPy and the ClickHouse MCP Server |
| How to build an AI Agent with mcp-agent and the ClickHouse MCP Server | Learn how to build an AI Agent with mcp-agent and the ClickHouse MCP Server |
| How to build an AI Agent with Microsoft Agent Framework and the ClickHouse MCP Server | Learn how to build an AI Agent with Microsoft Agent Framework and the ClickHouse MCP Server |
| How to build an AI Agent with Upsonic and the ClickHouse MCP Server | Learn how to build an AI Agent with Upsonic and the ClickHouse MCP Server |
| How to build an OpenAI agent using ClickHouse MCP Server. | Learn how to build an OpenAI agent that can interact with ClickHouse MCP Server. |
| Set Up ClickHouse MCP Server with AnythingLLM and ClickHouse Cloud | This guide explains how to set up AnythingLLM with a ClickHouse MCP server using Docker. |
| Set Up ClickHouse MCP Server with Claude Desktop | This guide explains how to set up Claude Desktop with a ClickHouse MCP server. |
| Set Up ClickHouse MCP Server with Jan.ai | This guide explains how to set up Jan.ai with a ClickHouse MCP server. |
| Set Up ClickHouse MCP Server with LibreChat and ClickHouse Cloud | This guide explains how to set up LibreChat with a ClickHouse MCP server using Docker. |
| Set Up ClickHouse MCP Server with Ollama | This guide explains how to set up Ollama with a ClickHouse MCP server. |
| Set Up ClickHouse MCP Server with Open WebUI and ClickHouse Cloud | This guide explains how to set up Open WebUI with a ClickHouse MCP server using Docker. |
slug: /use-cases/AI/MCP/librechat
sidebar_label: 'Integrate LibreChat'
title: 'Set Up ClickHouse MCP Server with LibreChat and ClickHouse Cloud'
pagination_prev: null
pagination_next: null
description: 'This guide explains how to set up LibreChat with a ClickHouse MCP server using Docker.'
keywords: ['AI', 'Librechat', 'MCP']
show_related_blogs: true
doc_type: 'guide'
import {CardHorizontal} from '@clickhouse/click-ui/bundled'
import Link from '@docusaurus/Link';
import Image from '@theme/IdealImage';
import LibreInterface from '@site/static/images/use-cases/AI_ML/MCP/librechat.png';
Using ClickHouse MCP server with LibreChat
This guide explains how to set up LibreChat with a ClickHouse MCP server using Docker
and connect it to the ClickHouse example datasets.
Install Docker {#install-docker}
You will need Docker to run LibreChat and the MCP server. To get Docker:
1. Visit
docker.com
2. Download Docker desktop for your operating system
3. Install Docker by following the instructions for your operating system
4. Open Docker Desktop and ensure it is running
For more information, see the
Docker documentation
.
Clone the LibreChat repository {#clone-librechat-repo}
Open a terminal (command prompt, terminal or PowerShell) and clone the
LibreChat repository using the following command:
```bash
git clone https://github.com/danny-avila/LibreChat.git
cd LibreChat
```
Create and edit the .env file {#create-and-edit-env-file}
Copy the example configuration file from
.env.example
to
.env
:
```bash
cp .env.example .env
```
Open the
.env
file in your favorite text editor. You will see sections for
many popular LLM providers, including OpenAI, Anthropic, and AWS Bedrock, for
example:
```text title=".env"
============
Anthropic
============
highlight-next-line
ANTHROPIC_API_KEY=user_provided
ANTHROPIC_MODELS=claude-opus-4-20250514,claude-sonnet-4-20250514,claude-3-7-sonnet-20250219,claude-3-5-sonnet-20241022,claude-3-5-haiku-20241022,claude-3-opus-20240229,claude-3-sonnet-20240229,claude-3-haiku-20240307
ANTHROPIC_REVERSE_PROXY=
```
Replace
user_provided
with your API key for the LLM provider you want to use.
:::note Using a local LLM
If you don't have an API key you can use a local LLM like Ollama. You'll see how
to do this later in step
"Install Ollama"
. For now
don't modify the .env file and continue with the next steps.
:::
Create a librechat.yaml file {#create-librechat-yaml-file}
Run the following command to create a new
librechat.yaml
file:
```bash
cp librechat.example.yaml librechat.yaml
```
This creates the main
configuration file
for LibreChat.
Add ClickHouse MCP server to Docker compose {#add-clickhouse-mcp-server-to-docker-compose}
Next we'll add the ClickHouse MCP server to the LibreChat Docker compose file
so that the LLM can interact with the
ClickHouse SQL playground
.
Create a file called
docker-compose.override.yml
and add the following configuration to it:
```yml title="docker-compose.override.yml"
services:
  api:
    volumes:
      - ./librechat.yaml:/app/librechat.yaml
  mcp-clickhouse:
    image: mcp/clickhouse
    container_name: mcp-clickhouse
    ports:
      - 8001:8000
    extra_hosts:
      - "host.docker.internal:host-gateway"
    environment:
      - CLICKHOUSE_HOST=sql-clickhouse.clickhouse.com
      - CLICKHOUSE_USER=demo
      - CLICKHOUSE_PASSWORD=
      - CLICKHOUSE_MCP_SERVER_TRANSPORT=sse
      - CLICKHOUSE_MCP_BIND_HOST=0.0.0.0
```
If you want to explore your own data, you can do so by
using the
host, username and password
of your own ClickHouse Cloud service.
Configure MCP server in librechat.yaml {#configure-mcp-server-in-librechat-yaml}
Open
librechat.yaml
and place the following configuration at the end of the file:
```yml
mcpServers:
  clickhouse-playground:
    type: sse
    url: http://host.docker.internal:8001/sse
```
This configures LibreChat to connect to the MCP server running on Docker.
Find the following line:
```text title="librechat.yaml"
socialLogins: ['github', 'google', 'discord', 'openid', 'facebook', 'apple', 'saml']
```
For simplicity, we will remove the need to authenticate for now:
```text title="librechat.yaml"
socialLogins: []
```
Add a local LLM using Ollama (optional) {#add-local-llm-using-ollama}
Install Ollama {#install-ollama}
Go to the
Ollama website
and install Ollama for your system.
Once installed, you can run a model like this:
```bash
ollama run qwen3:32b
```
This will pull the model to your local machine if it is not present.
For a list of models, see the
Ollama library
.
Configure Ollama in librechat.yaml {#configure-ollama-in-librechat-yaml}
Once the model has downloaded, configure it in
librechat.yaml
:
```text title="librechat.yaml"
custom:
  - name: "Ollama"
    apiKey: "ollama"
    baseURL: "http://host.docker.internal:11434/v1/"
    models:
      default: ["qwen3:32b"]
      fetch: false
    titleConvo: true
    titleModel: "current_model"
    summarize: false
    summaryModel: "current_model"
    forcePrompt: false
    modelDisplayLabel: "Ollama"
```
Start all services {#start-all-services}
From the root of the LibreChat project folder, run the following command to start the services:
```bash
docker compose up
```
Wait until all services are fully running.
Open LibreChat in your browser {#open-librechat-in-browser}
Once all services are up and running, open your browser and go to
http://localhost:3080/
Create a free LibreChat account if you don't yet have one, and sign in. You should
now see the LibreChat interface connected to the ClickHouse MCP server, and optionally,
your local LLM.
From the chat interface, select
clickhouse-playground
as your MCP server:
You can now prompt the LLM to explore the ClickHouse example datasets. Give it a go:
```text title="Prompt"
What datasets do you have access to?
```
slug: /use-cases/AI/MCP/claude-desktop
sidebar_label: 'Integrate Claude Desktop'
title: 'Set Up ClickHouse MCP Server with Claude Desktop'
pagination_prev: null
pagination_next: null
description: 'This guide explains how to set up Claude Desktop with a ClickHouse MCP server.'
keywords: ['AI', 'Librechat', 'MCP']
show_related_blogs: true
doc_type: 'guide'
import {CardHorizontal} from '@clickhouse/click-ui/bundled'
import Link from '@docusaurus/Link';
import Image from '@theme/IdealImage';
import ClaudeDesktopConfig from '@site/static/images/use-cases/AI_ML/MCP/claude-desktop-config.png';
import FindMCPServers from '@site/static/images/use-cases/AI_ML/MCP/find-mcp-servers.gif';
import MCPPermission from '@site/static/images/use-cases/AI_ML/MCP/mcp-permission.png';
import ClaudeConversation from '@site/static/images/use-cases/AI_ML/MCP/claude-conversation.png';
Using ClickHouse MCP server with Claude Desktop
This guide explains how to set up Claude Desktop with a ClickHouse MCP server using uv
and connect it to the ClickHouse example datasets.
Install uv {#install-uv}
You will need to install
uv
to follow the instructions in this guide.
If you don't want to use uv, you will need to update the MCP Server config to use an alternative package manager.
Download Claude Desktop {#download-claude-desktop}
You'll also need to install the Claude Desktop app, which you can download from the
Claude Desktop website
.
Configuring ClickHouse MCP server {#configure-clickhouse-mcp-server}
Once you've got Claude Desktop installed, it's time to configure the
ClickHouse MCP server
.
We can do this via the
Claude Desktop configuration file
.
To find this file, first go to the settings page (
Cmd+,
on a Mac) and then click on the
Developer
tab on the left menu.
You'll then see the following screen, on which you'll need to click on the
Edit config
button:
This will take you to a directory containing the configuration file (
claude_desktop_config.json
).
The first time you open that file, it will likely contain the following content:
```json
{
  "mcpServers": {}
}
```
The
mcpServers
dictionary takes in the name of an MCP Server as a key, and a dictionary of configuration options as a value.
For example, the ClickHouse MCP server configuration connecting to the ClickHouse Playground would look like this:
json
{
"mcpServers": {
"mcp-clickhouse": {
"command": "uv",
"args": [
"run",
"--with",
"mcp-clickhouse",
"--python",
"3.10",
"mcp-clickhouse"
],
"env": {
"CLICKHOUSE_HOST": "sql-clickhouse.clickhouse.com",
"CLICKHOUSE_PORT": "8443",
"CLICKHOUSE_USER": "demo",
"CLICKHOUSE_PASSWORD": "",
"CLICKHOUSE_SECURE": "true",
"CLICKHOUSE_VERIFY": "true",
"CLICKHOUSE_CONNECT_TIMEOUT": "30",
"CLICKHOUSE_SEND_RECEIVE_TIMEOUT": "30"
}
}
}
} | {"source_file": "02_claude-desktop.md"} | [
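If you prefer to script this change rather than edit the file by hand, the entry can be merged into an existing config without clobbering other MCP servers. The following is a minimal sketch, assuming the file is in the current directory (on macOS the real file lives under `~/Library/Application Support/Claude/claude_desktop_config.json`):

```python
import json
from pathlib import Path

# Hypothetical location for illustration; point this at your real
# claude_desktop_config.json before running.
config_path = Path("claude_desktop_config.json")

# Load the existing config, or start from the empty default shown above.
config = json.loads(config_path.read_text()) if config_path.exists() else {"mcpServers": {}}

# Add (or overwrite) only the mcp-clickhouse entry, leaving other servers intact.
config["mcpServers"]["mcp-clickhouse"] = {
    "command": "uv",
    "args": ["run", "--with", "mcp-clickhouse", "--python", "3.10", "mcp-clickhouse"],
    "env": {
        "CLICKHOUSE_HOST": "sql-clickhouse.clickhouse.com",
        "CLICKHOUSE_PORT": "8443",
        "CLICKHOUSE_USER": "demo",
        "CLICKHOUSE_PASSWORD": "",
        "CLICKHOUSE_SECURE": "true",
    },
}

config_path.write_text(json.dumps(config, indent=2))
```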
Once you've updated the config, you'll need to restart Claude Desktop for the changes to take effect.
:::warning
Depending on how you installed
uv
, you might receive the following error when restarting Claude Desktop:
text
MCP mcp-clickhouse: spawn uv ENOENT
If that happens, you'll need to update the
command
to have the full path to
uv
. e.g. if you've installed via Cargo, it will be
/Users/<username>/.cargo/bin/uv
:::
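To find that full path, you can run `which uv` in a terminal, or ask Python; a small sketch:

```python
import shutil

# Look up uv on the current PATH. GUI apps like Claude Desktop may inherit a
# narrower PATH than your shell, which is what triggers the ENOENT error.
uv_path = shutil.which("uv")
print(uv_path or "uv not on PATH; try ~/.cargo/bin/uv or ~/.local/bin/uv")
```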
Using ClickHouse MCP server {#using-clickhouse-mcp-server}
Once you've restarted Claude Desktop, you can find the ClickHouse MCP server by clicking on the
Search and tools
icon:
You can then choose whether to disable all or some of the tools.
Now we're ready to ask Claude some questions that will result in it using the ClickHouse MCP server.
For example, we could ask it
What's the most interesting dataset in the SQL playground?
.
Claude will ask us to confirm the use of each tool in the MCP Server the first time that it's called:
Below you can see part of a conversation that includes some tool calls to the ClickHouse MCP Server:
slug: /use-cases/AI/MCP/open-webui
sidebar_label: 'Integrate Open WebUI'
title: 'Set Up ClickHouse MCP Server with Open WebUI and ClickHouse Cloud'
pagination_prev: null
pagination_next: null
description: 'This guide explains how to set up Open WebUI with a ClickHouse MCP server using Docker.'
keywords: ['AI', 'Open WebUI', 'MCP']
show_related_blogs: true
doc_type: 'guide'
import {CardHorizontal} from '@clickhouse/click-ui/bundled'
import Link from '@docusaurus/Link';
import Image from '@theme/IdealImage';
import Endpoints from '@site/static/images/use-cases/AI_ML/MCP/0_endpoints.png';
import Settings from '@site/static/images/use-cases/AI_ML/MCP/1_settings.png';
import ToolsPage from '@site/static/images/use-cases/AI_ML/MCP/2_tools_page.png';
import AddTool from '@site/static/images/use-cases/AI_ML/MCP/3_add_tool.png';
import ToolsAvailable from '@site/static/images/use-cases/AI_ML/MCP/4_tools_available.png';
import ListOfTools from '@site/static/images/use-cases/AI_ML/MCP/5_list_of_tools.png';
import Connections from '@site/static/images/use-cases/AI_ML/MCP/6_connections.png';
import AddConnection from '@site/static/images/use-cases/AI_ML/MCP/7_add_connection.png';
import OpenAIModels from '@site/static/images/use-cases/AI_ML/MCP/8_openai_models_more.png';
import Conversation from '@site/static/images/use-cases/AI_ML/MCP/9_conversation.png';
Using ClickHouse MCP server with Open WebUI
This guide explains how to set up
Open WebUI
with a ClickHouse MCP server
and connect it to the ClickHouse example datasets.
Install uv {#install-uv}
You will need to install
uv
to follow the instructions in this guide.
If you don't want to use uv, you will need to update the MCP Server config to use an alternative package manager.
Launch Open WebUI {#launch-open-webui}
To launch Open WebUI, you can run the following command:
bash
uv run --with open-webui open-webui serve
Navigate to http://localhost:8080/ to see the UI.
Configure ClickHouse MCP Server {#configure-clickhouse-mcp-server}
To set up the ClickHouse MCP Server, we'll need to expose the MCP Server as OpenAPI endpoints.
Let's first set the environment variables that will let us connect to the ClickHouse SQL Playground:
bash
export CLICKHOUSE_HOST="sql-clickhouse.clickhouse.com"
export CLICKHOUSE_USER="demo"
export CLICKHOUSE_PASSWORD=""
Then, we can run
mcpo
to create the OpenAPI endpoints:
bash
uvx mcpo --port 8000 -- uv run --with mcp-clickhouse --python 3.10 mcp-clickhouse
You can see a list of the endpoints created by navigating to http://localhost:8000/docs
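You can also fetch the generated spec programmatically. Below is a small sketch, assuming mcpo is still running locally and serving the standard FastAPI `/openapi.json` path:

```python
import json
from urllib.request import urlopen

# List the tool endpoints mcpo generated from the MCP server.
try:
    spec = json.load(urlopen("http://localhost:8000/openapi.json", timeout=5))
    print(sorted(spec.get("paths", {})))
except OSError:
    print("mcpo isn't running on http://localhost:8000 yet")
```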
To use these endpoints with Open WebUI, we need to navigate to settings:
Click on
Tools
:
Add http://localhost:8000 as the tool URL:
Once we've done this, we should see a
1
next to the tool icon on the chat bar:
If we click on the tool icon, we can then list the available tools:
Configure OpenAI {#configure-openai}
By default, Open WebUI works with Ollama models, but we can add OpenAI compatible endpoints as well.
These are configured via the settings menu, but this time we need to click on the
Connections
tab:
Let's add the endpoint and our OpenAI key:
The OpenAI models will then be available on the top menu:
Chat to ClickHouse MCP Server with Open WebUI {#chat-to-clickhouse-mcp-server}
We can then have a conversation and Open WebUI will call the MCP Server if necessary:
slug: /use-cases/AI/MCP/janai
sidebar_label: 'Integrate Jan.ai'
title: 'Set Up ClickHouse MCP Server with Jan.ai'
pagination_prev: null
pagination_next: null
description: 'This guide explains how to set up Jan.ai with a ClickHouse MCP server.'
keywords: ['AI', 'Jan.ai', 'MCP']
show_related_blogs: true
doc_type: 'guide'
import {CardHorizontal} from '@clickhouse/click-ui/bundled'
import Link from '@docusaurus/Link';
import Image from '@theme/IdealImage';
import OpenAIModels from '@site/static/images/use-cases/AI_ML/MCP/0_janai_openai.png';
import MCPServers from '@site/static/images/use-cases/AI_ML/MCP/1_janai_mcp_servers.png';
import MCPServersList from '@site/static/images/use-cases/AI_ML/MCP/2_janai_mcp_servers_list.png';
import MCPForm from '@site/static/images/use-cases/AI_ML/MCP/3_janai_add_mcp_server.png';
import MCPEnabled from '@site/static/images/use-cases/AI_ML/MCP/4_janai_toggle.png';
import MCPTool from '@site/static/images/use-cases/AI_ML/MCP/5_jani_tools.png';
import Question from '@site/static/images/use-cases/AI_ML/MCP/6_janai_question.png';
import MCPToolConfirm from '@site/static/images/use-cases/AI_ML/MCP/7_janai_tool_confirmation.png';
import ToolsCalled from '@site/static/images/use-cases/AI_ML/MCP/8_janai_tools_called.png';
import ToolsCalledExpanded from '@site/static/images/use-cases/AI_ML/MCP/9_janai_tools_called_expanded.png';
import Result from '@site/static/images/use-cases/AI_ML/MCP/10_janai_result.png';
Using ClickHouse MCP server with Jan.ai
This guide explains how to use the ClickHouse MCP Server with
Jan.ai
.
Install Jan.ai {#install-janai}
Jan.ai is an open-source ChatGPT alternative that runs 100% offline.
You can download Jan.ai for
Mac
,
Windows
, or
Linux
.
It's a native app, so once it's downloaded, you can launch it.
Add LLM to Jan.ai {#add-llm-to-janai}
We can enable models via the settings menu.
To enable OpenAI, we need to provide an API key, as shown below:
Enable MCP Servers {#enable-mcp-servers}
At the time of writing, MCP Servers are an experimental feature in Jan.ai.
We can enable them by toggling experimental features:
Once that toggle is pressed, we'll see
MCP Servers
on the left menu.
Configure ClickHouse MCP Server {#configure-clickhouse-mcp-server}
If we click on the
MCP Servers
menu, we'll see a list of MCP servers that we can connect to:
These servers are all disabled by default, but we can enable them by clicking the toggle.
To install the ClickHouse MCP Server, we need to click on the
+
icon and then populate the form with the following:
Once we've done that, we'll need to toggle the ClickHouse Server if it's not already toggled:
The ClickHouse MCP Server's tools will now be visible on the chat dialog:
Chat to ClickHouse MCP Server with Jan.ai {#chat-to-clickhouse-mcp-server}
It's time to have a conversation about some data stored in ClickHouse!
Let's ask a question:
Jan.ai will ask for confirmation before calling a tool:
It will then show us the list of tool calls that were made:
If we click on the tool call, we can see the details of the call:
And then underneath, we have our result: | {"source_file": "07_janai.md"} | [
-0.06566592305898666,
-0.03566286712884903,
-0.004654652439057827,
0.02633533626794815,
0.021930916234850883,
-0.06545958667993546,
0.08305849879980087,
-0.018793776631355286,
0.0459173284471035,
-0.027115000411868095,
-0.025193020701408386,
-0.0012372798519209027,
-0.047789961099624634,
-... |
ec86c736-52f8-49fb-88bb-f20ab78be745 | slug: /use-cases/AI_ML/AIChat
sidebar_label: 'AI chat'
title: 'Using AI Chat in ClickHouse Cloud'
pagination_prev: null
pagination_next: null
description: 'Guide to enabling and using the AI Chat feature in ClickHouse Cloud Console'
keywords: ['AI', 'ClickHouse Cloud', 'Chat', 'SQL Console', 'Agent', 'Docs AI']
show_related_blogs: true
sidebar_position: 2
doc_type: 'guide'
import Link from '@docusaurus/Link';
import Image from '@theme/IdealImage';
import img_open from '@site/static/images/use-cases/AI_ML/AIChat/1_open_chat.png';
import img_consent from '@site/static/images/use-cases/AI_ML/AIChat/2_consent.png';
import img_modes from '@site/static/images/use-cases/AI_ML/AIChat/3_modes.png';
import img_thinking from '@site/static/images/use-cases/AI_ML/AIChat/4_thinking.png';
import img_history from '@site/static/images/use-cases/AI_ML/AIChat/5_history.png';
import img_result_actions from '@site/static/images/use-cases/AI_ML/AIChat/6_result_actions.png';
import img_new_tab from '@site/static/images/use-cases/AI_ML/AIChat/7_open_in_editor.png';
Using AI Chat in ClickHouse Cloud
This guide explains how to enable and use the AI Chat feature in the ClickHouse Cloud Console.
Prerequisites {#prerequisites}
You must have access to a ClickHouse Cloud organization with AI features enabled (contact your org admin or support if unavailable).
Open the AI Chat panel {#open-panel}
Navigate to a ClickHouse Cloud service.
In the left sidebar, click the sparkle icon labeled βAsk AIβ.
(Shortcut) Press
β
+
'
(macOS) or
Ctrl
+
'
(Linux/Windows) to toggle open.
Accept the data usage consent (first run) {#consent}
On first use you are prompted with a consent dialog describing data handling and thirdβparty LLM sub-processors.
Review and accept to proceed. If you decline, the panel will not open.
Choose a chat mode {#modes}
AI Chat currently supports:
Agent
: Multiβstep reasoning over schema + metadata (service must be awake).
Docs AI (Ask)
: Focused Q&A grounded in official ClickHouse documentation and bestβpractice references.
Use the mode selector at the bottom-left of the flyout to switch.
Compose and send a message {#compose}
Type your question (e.g. βCreate a materialized view to aggregate daily events by userβ).
Press
Enter
to send (use
Shift
+
Enter
for a newline).
While the model is processing you can click βStopβ to interrupt.
Understanding βAgentβ thinking steps {#thinking-steps}
In Agent mode you may see expandable intermediate βthinkingβ or planning steps. These provide transparency into how the assistant forms its answer. Collapse or expand as needed.
Starting new chats {#new-chats}
Click the βNew Chatβ button to clear context and begin a fresh session.
Viewing chat history {#history}
The lower section lists your recent chats.
Select a previous chat to load its messages.
Delete a conversation using the trash icon.
Working with generated SQL {#sql-actions}
When the assistant returns SQL:
Review for correctness.
Click βOpen in editorβ to load the query into a new SQL tab.
Modify and execute within the Console.
Stopping or interrupting a response {#interrupt}
If a response is taking too long or diverging:
Click the βStopβ button (visible while processing).
The message is marked as interrupted; you can refine your prompt and resend.
Keyboard shortcuts {#shortcuts}
| Action | Shortcut |
| ------ | -------- |
| Open AI Chat | β + ' / Ctrl + ' |
| Send message | Enter |
| New line | Shift + Enter |
slug: /use-cases/AI/MCP/ai-agent-libraries/DSPy
sidebar_label: 'Integrate DSPy'
title: 'How to build an AI Agent with DSPy and the ClickHouse MCP Server'
pagination_prev: null
pagination_next: null
description: 'Learn how to build an AI agent with DSPy and the ClickHouse MCP Server'
keywords: ['ClickHouse', 'MCP', 'DSPy']
show_related_blogs: true
doc_type: 'guide'
How to build an AI Agent with DSPy and the ClickHouse MCP Server
In this guide, you'll learn how to build an AI agent with
DSPy
that
can interact with
ClickHouse's SQL playground
using
ClickHouse's MCP Server
.
Prerequisites {#prerequisites}
You'll need to have Python installed on your system.
You'll need to have
pip
installed on your system.
You'll need an Anthropic API key, or an API key from another LLM provider
You can run the following steps either from your Python REPL or via script.
:::note Example notebook
This example can be found as a notebook in the
examples repository
.
:::
Install libraries {#install-libraries}
Run the following commands using
pip
to install the required libraries:
shell
pip install -q --upgrade pip
pip install -q dspy
pip install -q mcp
Setup credentials {#setup-credentials}
Next, you'll need to provide your Anthropic API key:
python
import os, getpass
os.environ["ANTHROPIC_API_KEY"] = getpass.getpass("Enter Anthropic API Key:")
:::note Using another LLM provider
If you don't have an Anthropic API key, and want to use another LLM provider,
you can find the instructions for setting up your credentials in the
DSPy docs
:::
Next, define the credentials needed to connect to the ClickHouse SQL playground:
python
env = {
"CLICKHOUSE_HOST": "sql-clickhouse.clickhouse.com",
"CLICKHOUSE_PORT": "8443",
"CLICKHOUSE_USER": "demo",
"CLICKHOUSE_PASSWORD": "",
"CLICKHOUSE_SECURE": "true"
}
Initialize MCP Server {#initialize-mcp}
Now configure the ClickHouse MCP Server to point at the ClickHouse SQL playground.
```python
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
import dspy
server_params = StdioServerParameters(
command="uv",
args=[
'run',
'--with', 'mcp-clickhouse',
'--python', '3.13',
'mcp-clickhouse'
],
env=env
)
```
Initialize LLM {#initialize-llm}
Next, initialize the LLM with the following line:
python
dspy.configure(lm=dspy.LM("anthropic/claude-sonnet-4-20250514"))
Run the agent {#run-the-agent}
Finally, initialize and run the agent:
```python
class DataAnalyst(dspy.Signature):
"""You are a data analyst. You'll be asked questions and you need to try to answer them using the tools you have access to. """
user_request: str = dspy.InputField()
process_result: str = dspy.OutputField(
desc=(
"Answer to the query"
)
)
from utils import print_dspy_result
async with stdio_client(server_params) as (read, write):
async with ClientSession(read, write) as session:
await session.initialize()
tools = await session.list_tools()
dspy_tools = []
for tool in tools.tools:
dspy_tools.append(dspy.Tool.from_mcp_tool(session, tool))
react = dspy.ReAct(DataAnalyst, tools=dspy_tools)
result = await react.acall(user_request="What's the most popular Amazon product category")
print_dspy_result(result)
```
```response title="Response"
π€ DSPy ReAct Result
π STEP 1
π§ THINKING: I need to find information about Amazon product categories and determine which one is most popular. First, I should explore what databases are available to see if there's any Amazon-related data.
π§ TOOL: list_databases
π RESULT:
amazon
bluesky
country
covid
default
dns
environmental
food
forex
geo
git
github
hackernews
imdb
logs
metrica
mgbench
mta
noaa
nyc_taxi
nypd
ontime
opensky
otel
otel_v2
pypi
random
reddit
rubygems
sta...
π STEP 2
π§ THINKING: Great! I can see there's an "amazon" database available. This is exactly what I need to find information about Amazon product categories. Let me explore the tables in the amazon database to see what data is available.
π§ TOOL: list_tables
Args: {'database': 'amazon'}
π RESULT:
{
"database": "amazon",
"name": "amazon_reviews",
"comment": "",
"columns": [
{
"name": "review_date",
"type": "Date",
"default_type": "",
"default_expression": "",...
π STEP 3
π§ THINKING: Perfect! I found the amazon_reviews table which contains a product_category column. This table has over 150 million rows of Amazon review data, which should give me a good representation of product popularity. To find the most popular Amazon product category, I need to count the number of reviews per category and order them by count in descending order. The category with the most reviews would be considered the most popular.
π§ TOOL: run_select_query
Args: {'query': 'SELECT product_category, COUNT(*) as review_count FROM amazon.amazon_reviews GROUP BY product_category ORDER BY review_count DESC LIMIT 10'}
π RESULT:
1. Books: 19,530,930 reviews
2. Digital_Ebook_Purchase: 17,622,107 reviews
3. Wireless: 9,001,567 reviews
4. PC: 6,908,245 reviews
5. Home: 6,221,321 reviews
... and 5 more
π STEP 4
π§ THINKING: Perfect! I have found the answer to the user's question. Based on the Amazon reviews data, I can clearly see that "Books" is the most popular Amazon product category with 19,530,930 reviews, followed by "Digital_Ebook_Purchase" with 17,622,107 reviews. The data shows a clear ranking of product categories by popularity based on the number of reviews each category has received. I now have all the information needed to answer the user's question.
π§ TOOL: finish
π RESULT:
Completed.
π― REASONING
To answer the question about the most popular Amazon product category, I accessed the Amazon database and found a table called "amazon_reviews" with over 150 million review records. This table contains a "product_category" column that allows me to analyze product popularity based on review volume. I ran a query to count the number of reviews per product category and ordered them by count in descending order. The assumption here is that categories with more reviews indicate higher popularity, as more reviews typically correlate with more purchases and customer engagement.
β
FINAL RESULT
Based on Amazon review data containing over 150 million records, the most popular Amazon product category is
Books
with 19,530,930 reviews.
Here are the top 10 most popular Amazon product categories by review count:
Books
- 19,530,930 reviews
Digital_Ebook_Purchase
- 17,622,107 reviews
Wireless
- 9,001,567 reviews
PC
- 6,908,245 reviews
Home
- 6,221,321 reviews
Apparel
- 5,906,085 reviews
Health & Personal Care
- 5,331,239 reviews
Beauty
- 5,115,462 reviews
Video DVD
- 5,069,014 reviews
Mobile_Apps
- 5,033,164 reviews
It's interesting to note that Books and Digital Ebook Purchase (which are related categories) together account for over 37 million reviews, showing the strong popularity of reading materials on Amazon's platform.
```
slug: /use-cases/AI/MCP/ai-agent-libraries/claude-agent-sdk
sidebar_label: 'Integrate Claude Agent SDK'
title: 'How to build an AI Agent with Claude Agent SDK and the ClickHouse MCP Server'
pagination_prev: null
pagination_next: null
description: 'Learn how to build an AI Agent with Claude Agent SDK and the ClickHouse MCP Server'
keywords: ['ClickHouse', 'MCP', 'Claude']
show_related_blogs: true
doc_type: 'guide'
How to build an AI Agent with Claude Agent SDK and the ClickHouse MCP Server
In this guide you'll learn how to build a
Claude Agent SDK
AI agent that can interact with
ClickHouse's SQL playground
using
ClickHouse's MCP Server
.
:::note Example notebook
This example can be found as a notebook in the
examples repository
.
:::
Prerequisites {#prerequisites}
You'll need to have Python installed on your system.
You'll need to have
pip
installed on your system.
You'll need an Anthropic API key
You can run the following steps either from your Python REPL or via script.
Install libraries {#install-libraries}
Install the Claude Agent SDK library by running the following commands:
shell
pip install -q --upgrade pip
pip install -q claude-agent-sdk
pip install -q ipywidgets
Setup credentials {#setup-credentials}
Next, you'll need to provide your Anthropic API key:
python
import os, getpass
os.environ["ANTHROPIC_API_KEY"] = getpass.getpass("Enter Anthropic API Key:")
response title="Response"
Enter Anthropic API Key: Β·Β·Β·Β·Β·Β·Β·Β·
Next, define the credentials needed to connect to the ClickHouse SQL playground:
python
env = {
"CLICKHOUSE_HOST": "sql-clickhouse.clickhouse.com",
"CLICKHOUSE_PORT": "8443",
"CLICKHOUSE_USER": "demo",
"CLICKHOUSE_PASSWORD": "",
"CLICKHOUSE_SECURE": "true"
}
Initialize MCP Server and Claude Agent SDK agent {#initialize-mcp-and-agent}
Now configure the ClickHouse MCP Server to point at the ClickHouse SQL playground
and also initialize our agent and ask it a question:
python
from claude_agent_sdk import query, ClaudeAgentOptions, AssistantMessage, UserMessage, TextBlock, ToolUseBlock
```python
options = ClaudeAgentOptions(
allowed_tools=[
"mcp__mcp-clickhouse__list_databases",
"mcp__mcp-clickhouse__list_tables",
"mcp__mcp-clickhouse__run_select_query",
"mcp__mcp-clickhouse__run_chdb_select_query"
],
mcp_servers={
"mcp-clickhouse": {
"command": "uv",
"args": [
"run",
"--with", "mcp-clickhouse",
"--python", "3.10",
"mcp-clickhouse"
],
"env": env
}
}
)
async for message in query(prompt="Tell me something interesting about UK property sales", options=options):
if isinstance(message, AssistantMessage):
for block in message.content:
if isinstance(block, TextBlock):
print(f"π€ {block.text}")
if isinstance(block, ToolUseBlock):
print(f"π οΈ {block.name} {block.input}")
elif isinstance(message, UserMessage):
for block in message.content:
if isinstance(block, TextBlock):
print(block.text)
```
Note that the code inside the
for
block filters the output for brevity.
```response title="Response"
π€ I'll query the ClickHouse database to find something interesting about UK property sales.
Let me first see what databases are available:
π οΈ mcp__mcp-clickhouse__list_databases {}
π€ Great! There's a "uk" database. Let me see what tables are available:
π οΈ mcp__mcp-clickhouse__list_tables {'database': 'uk'}
π€ Perfect! The
uk_price_paid
table has over 30 million property sales records. Let me find something interesting:
π οΈ mcp__mcp-clickhouse__run_select_query {'query': "\nSELECT \n street,\n town,\n max(price) as max_price,\n min(price) as min_price,\n max(price) - min(price) as price_difference,\n count() as sales_count\nFROM uk.uk_price_paid\nWHERE street != ''\nGROUP BY street, town\nHAVING sales_count > 100\nORDER BY price_difference DESC\nLIMIT 1\n"}
π€ Here's something fascinating:
Baker Street in London
(yes, the famous Sherlock Holmes street!) has the largest price range of any street with over 100 sales - properties sold for as low as
Β£2,500
and as high as
Β£594.3 million
, a staggering difference of over Β£594 million!
This makes sense given Baker Street is one of London's most prestigious addresses, running through wealthy areas like Marylebone, and has had 541 recorded sales in this dataset.
```
slug: /use-cases/AI/MCP/ai-agent-libraries/mcp-agent
sidebar_label: 'Integrate mcp-agent'
title: 'How to build an AI Agent with mcp-agent and the ClickHouse MCP Server'
pagination_prev: null
pagination_next: null
description: 'Learn how to build an AI Agent with mcp-agent and the ClickHouse MCP Server'
keywords: ['ClickHouse', 'MCP', 'mcp-agent']
show_related_blogs: true
doc_type: 'guide'
How to build an AI Agent with mcp-agent and the ClickHouse MCP Server
In this guide you'll learn how to build a
mcp-agent
AI agent that can interact with
ClickHouse's SQL playground
using
ClickHouse's MCP Server
.
:::note Example notebook
This example can be found as a notebook in the
examples repository
.
:::
Prerequisites {#prerequisites}
You'll need to have Python installed on your system.
You'll need to have
pip
installed on your system.
You'll need an OpenAI API key
You can run the following steps either from your Python REPL or via script.
Install libraries {#install-libraries}
Install the mcp-agent library by running the following commands:
shell
pip install -q --upgrade pip
pip install -q mcp-agent openai
pip install -q ipywidgets
Setup credentials {#setup-credentials}
Next, you'll need to provide your OpenAI API key:
python
import os, getpass
os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter OpenAI API Key:")
response title="Response"
Enter OpenAI API Key: Β·Β·Β·Β·Β·Β·Β·Β·
Next, define the credentials needed to connect to the ClickHouse SQL playground:
python
env = {
"CLICKHOUSE_HOST": "sql-clickhouse.clickhouse.com",
"CLICKHOUSE_PORT": "8443",
"CLICKHOUSE_USER": "demo",
"CLICKHOUSE_PASSWORD": "",
"CLICKHOUSE_SECURE": "true"
}
Initialize MCP Server and mcp-agent agent {#initialize-mcp-and-agent}
Now configure the ClickHouse MCP Server to point at the ClickHouse SQL playground
and also initialize our agent and ask it a question:
python
from mcp_agent.app import MCPApp
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM
from mcp_agent.config import Settings, MCPSettings, MCPServerSettings, OpenAISettings
```python
settings = Settings(
execution_engine="asyncio",
openai=OpenAISettings(
default_model="gpt-5-mini-2025-08-07",
),
mcp=MCPSettings(
servers={
"clickhouse": MCPServerSettings(
command='uv',
args=[
"run",
"--with", "mcp-clickhouse",
"--python", "3.10",
"mcp-clickhouse"
],
env=env
),
}
),
)
app = MCPApp(name="mcp_basic_agent", settings=settings)
async with app.run() as mcp_agent_app:
logger = mcp_agent_app.logger
data_agent = Agent(
name="database-anayst",
instruction="""You can answer questions with help from a ClickHouse database.""",
server_names=["clickhouse"],
    )
    async with data_agent:
llm = await data_agent.attach_llm(OpenAIAugmentedLLM)
result = await llm.generate_str(
message="Tell me about UK property prices in 2025. Use ClickHouse to work it out."
)
logger.info(result)
```
```response title="Response"
[10/10/25 11:26:20] INFO Starting MCP server 'mcp-clickhouse' with transport 'stdio' server.py:1502
2025-10-10 11:26:20,183 - mcp.server.lowlevel.server - INFO - Processing request of type ListToolsRequest
2025-10-10 11:26:20,184 - mcp.server.lowlevel.server - INFO - Processing request of type ListPromptsRequest
2025-10-10 11:26:20,185 - mcp.server.lowlevel.server - INFO - Processing request of type ListResourcesRequest
[INFO] 2025-10-10T11:26:20 mcp_agent.workflows.llm.augmented_llm_openai.database-anayst - Using reasoning model 'gpt-5-mini-2025-08-07' with
'medium' reasoning effort
[INFO] 2025-10-10T11:26:23 mcp_agent.mcp.mcp_aggregator.database-anayst - Requesting tool call
{
"data": {
"progress_action": "Calling Tool",
"tool_name": "list_databases",
"server_name": "clickhouse",
"agent_name": "database-anayst"
}
}
2025-10-10 11:26:23,477 - mcp.server.lowlevel.server - INFO - Processing request of type CallToolRequest
2025-10-10 11:26:23,479 - mcp-clickhouse - INFO - Listing all databases
2025-10-10 11:26:23,479 - mcp-clickhouse - INFO - Creating ClickHouse client connection to sql-clickhouse.clickhouse.com:8443 as demo (secure=True, verify=True, connect_timeout=30s, send_receive_timeout=30s)
2025-10-10 11:26:24,375 - mcp-clickhouse - INFO - Successfully connected to ClickHouse server version 25.8.1.8344
2025-10-10 11:26:24,551 - mcp-clickhouse - INFO - Found 38 databases
[INFO] 2025-10-10T11:26:26 mcp_agent.mcp.mcp_aggregator.database-anayst - Requesting tool call
{
"data": {
"progress_action": "Calling Tool",
"tool_name": "list_tables",
"server_name": "clickhouse",
"agent_name": "database-anayst"
}
}
2025-10-10 11:26:26,825 - mcp.server.lowlevel.server - INFO - Processing request of type CallToolRequest
2025-10-10 11:26:26,832 - mcp-clickhouse - INFO - Listing tables in database 'uk'
2025-10-10 11:26:26,832 - mcp-clickhouse - INFO - Creating ClickHouse client connection to sql-clickhouse.clickhouse.com:8443 as demo (secure=True, verify=True, connect_timeout=30s, send_receive_timeout=30s)
2025-10-10 11:26:27,311 - mcp-clickhouse - INFO - Successfully connected to ClickHouse server version 25.8.1.8344
2025-10-10 11:26:28,738 - mcp-clickhouse - INFO - Found 9 tables
[INFO] 2025-10-10T11:26:48 mcp_agent.mcp.mcp_aggregator.database-anayst - Requesting tool call
{
"data": {
"progress_action": "Calling Tool",
"tool_name": "run_select_query",
"server_name": "clickhouse",
"agent_name": "database-anayst"
}
}
[INFO] 2025-10-10T11:26:48 mcp_agent.mcp.mcp_aggregator.database-anayst - Requesting tool call
{
"data": {
"progress_action": "Calling Tool",
"tool_name": "run_select_query",
"server_name": "clickhouse",
"agent_name": "database-anayst"
}
}
[INFO] 2025-10-10T11:26:48 mcp_agent.mcp.mcp_aggregator.database-anayst - Requesting tool call
{
"data": {
"progress_action": "Calling Tool", | {"source_file": "mcp-agent.md"} | [
-0.01530083641409874,
-0.05469586327672005,
-0.022710133343935013,
0.01599741168320179,
0.07622403651475906,
-0.12268955260515213,
0.06517673283815384,
0.06879521161317825,
-0.04949362576007843,
0.049648791551589966,
-0.011635066010057926,
-0.041462309658527374,
0.0076317209750413895,
0.01... |
8cc4297d-8f36-4132-bd09-75fab33c6fd9 | "agent_name": "database-anayst"
}
}
[INFO] 2025-10-10T11:26:48 mcp_agent.mcp.mcp_aggregator.database-anayst - Requesting tool call
{
"data": {
"progress_action": "Calling Tool",
"tool_name": "run_select_query",
"server_name": "clickhouse",
"agent_name": "database-anayst"
}
}
[INFO] 2025-10-10T11:26:48 mcp_agent.mcp.mcp_aggregator.database-anayst - Requesting tool call
{
"data": {
"progress_action": "Calling Tool",
"tool_name": "run_select_query",
"server_name": "clickhouse",
"agent_name": "database-anayst"
}
}
[INFO] 2025-10-10T11:26:48 mcp_agent.mcp.mcp_aggregator.database-anayst - Requesting tool call
{
"data": {
"progress_action": "Calling Tool",
"tool_name": "run_select_query",
"server_name": "clickhouse",
"agent_name": "database-anayst"
}
}
2025-10-10 11:26:48,366 - mcp.server.lowlevel.server - INFO - Processing request of type CallToolRequest
2025-10-10 11:26:48,367 - mcp-clickhouse - INFO - Executing SELECT query: SELECT
count(
) AS transactions,
avg(price) AS avg_price,
quantileExact(0.5)(price) AS median_price,
min(price) AS min_price,
max(price) AS max_price
FROM uk.uk_price_paid_simple_partitioned
WHERE toYear(date)=2025
2025-10-10 11:26:48,367 - mcp-clickhouse - INFO - Creating ClickHouse client connection to sql-clickhouse.clickhouse.com:8443 as demo (secure=True, verify=True, connect_timeout=30s, send_receive_timeout=30s)
2025-10-10 11:26:49,262 - mcp-clickhouse - INFO - Successfully connected to ClickHouse server version 25.8.1.8344
2025-10-10 11:26:49,407 - mcp-clickhouse - INFO - Query returned 1 rows
2025-10-10 11:26:49,408 - mcp.server.lowlevel.server - INFO - Processing request of type CallToolRequest
2025-10-10 11:26:49,408 - mcp-clickhouse - INFO - Executing SELECT query: SELECT toMonth(date) AS month, count(
) AS transactions, avg(price) AS avg_price, quantileExact(0.5)(price) AS median_price
FROM uk.uk_price_paid_simple_partitioned
WHERE toYear(date)=2025
GROUP BY month
ORDER BY month
2025-10-10 11:26:49,408 - mcp-clickhouse - INFO - Creating ClickHouse client connection to sql-clickhouse.clickhouse.com:8443 as demo (secure=True, verify=True, connect_timeout=30s, send_receive_timeout=30s)
2025-10-10 11:26:49,857 - mcp-clickhouse - INFO - Successfully connected to ClickHouse server version 25.8.1.8344
2025-10-10 11:26:50,067 - mcp-clickhouse - INFO - Query returned 8 rows
2025-10-10 11:26:50,068 - mcp.server.lowlevel.server - INFO - Processing request of type CallToolRequest
2025-10-10 11:26:50,069 - mcp-clickhouse - INFO - Executing SELECT query: SELECT town, count(
) AS transactions, avg(price) AS avg_price
FROM uk.uk_price_paid_simple_partitioned
WHERE toYear(date)=2025
GROUP BY town
HAVING transactions >= 50
ORDER BY avg_price DESC
LIMIT 10
2025-10-10 11:26:50,069 - mcp-clickhouse - INFO - Creating ClickHouse client connection to sql-clickhouse.clickhouse.com:8443 as demo (secure=True, verify=True, connect_timeout=30s, send_receive_timeout=30s)
2025-10-10 11:26:50,594 - mcp-clickhouse - INFO - Successfully connected to ClickHouse server version 25.8.1.8344
2025-10-10 11:26:50,741 - mcp-clickhouse - INFO - Query returned 10 rows
2025-10-10 11:26:50,744 - mcp.server.lowlevel.server - INFO - Processing request of type CallToolRequest
2025-10-10 11:26:50,746 - mcp-clickhouse - INFO - Executing SELECT query: SELECT toYear(date) AS year, count(
) AS transactions, avg(price) AS avg_price, quantileExact(0.5)(price) AS median_price
FROM uk.uk_price_paid_simple_partitioned
WHERE toYear(date) IN (2024,2025)
GROUP BY year
ORDER BY year
2025-10-10 11:26:50,747 - mcp-clickhouse - INFO - Creating ClickHouse client connection to sql-clickhouse.clickhouse.com:8443 as demo (secure=True, verify=True, connect_timeout=30s, send_receive_timeout=30s)
2025-10-10 11:26:51,256 - mcp-clickhouse - INFO - Successfully connected to ClickHouse server version 25.8.1.8344
2025-10-10 11:26:51,447 - mcp-clickhouse - INFO - Query returned 2 rows
2025-10-10 11:26:51,449 - mcp.server.lowlevel.server - INFO - Processing request of type CallToolRequest
2025-10-10 11:26:51,452 - mcp-clickhouse - INFO - Executing SELECT query: SELECT type, count(*) AS transactions, avg(price) AS avg_price, quantileExact(0.5)(price) AS median_price
FROM uk.uk_price_paid
WHERE toYear(date)=2025
GROUP BY type
ORDER BY avg_price DESC
2025-10-10 11:26:51,452 - mcp-clickhouse - INFO - Creating ClickHouse client connection to sql-clickhouse.clickhouse.com:8443 as demo (secure=True, verify=True, connect_timeout=30s, send_receive_timeout=30s)
2025-10-10 11:26:51,952 - mcp-clickhouse - INFO - Successfully connected to ClickHouse server version 25.8.1.8344
2025-10-10 11:26:52,166 - mcp-clickhouse - INFO - Query returned 5 rows
[INFO] 2025-10-10T11:27:51 mcp_agent.mcp_basic_agent - Summary (TL;DR)
- Based on the UK Price Paid tables in ClickHouse, for transactions recorded in 2025 so far there are 376,633 sales with an average price of
£362,283 and a median price of £281,000. The data appears to include only months Jan–Aug 2025 (so 2025 is incomplete). There are extreme
outliers (min £100, max £127,700,000) that skew the mean.
What I computed (how)
I ran aggregations on the uk.price-paid tables in ClickHouse:
- overall 2025 summary (count, mean, median, min, max) from uk.uk_price_paid_simple_partitioned
- monthly breakdown for 2025 (transactions, mean, median)
- top towns in 2025 by average price (towns with >= 50 transactions)
- year comparison: 2024 vs 2025 (count, mean, median)
- breakdown by property type for 2025 (counts, avg, median) using uk.uk_price_paid
Key numbers (from the dataset)
- Overall 2025 (recorded transactions): transactions = 376,633; mean price = £362,282.66; median price = £281,000; min = £100; max =
£127,700,000.
- By month (2025): (month, transactions, mean price, median price)
- Jan: 53,927, mean £386,053, median £285,000
- Feb: 58,740, mean £371,803, median £285,000
- Mar: 95,274, mean £377,200, median £315,000
- Apr: 24,987, mean £331,692, median £235,000
- May: 39,013, mean £342,380, median £255,000
- Jun: 41,446, mean £334,667, median £268,500
- Jul: 44,431, mean £348,293, median £277,500
- Aug: 18,815, mean £364,653, median £292,999
(Only months 1–8 are present in the dataset.)
- Top towns by average price (2025, towns with ≥50 transactions)
- TRING: 126 txns, avg £1,973,274
- BUCKHURST HILL: 98 txns, avg £1,441,331
- ASCOT: 175 txns, avg £1,300,748
- RADLETT: 69 txns, avg £1,160,217
- COBHAM: 115 txns, avg £1,035,192
- EAST MOLESEY, BEACONSFIELD, ESHER, CHALFONT ST GILES, THAMES DITTON are also in the top 10 (all high-average commuter/affluent towns).
- Year comparison (2024 vs 2025 as recorded)
- 2024: 859,960 transactions, mean £390,879, median £280,000
- 2025: 376,633 transactions, mean £362,283, median £281,000
(2025 counts are much lower because the dataset only includes part of the year.)
- By property type (2025)
- detached: 85,362 txns, avg £495,714, median £415,000
- semi-detached: 107,580 txns, avg £319,922, median £270,000
- flat: 62,975 txns, avg £298,529, median £227,000
- terraced: 112,832 txns, avg £286,616, median £227,000
- other: 7,884 txns, avg £1,087,765 (median £315,000) – note small-group and outlier effect
Important caveats and data quality notes
- The dataset appears partial for 2025 (only months Jan–Aug present). Any "2025" totals are not full-year figures.
- Large outliers exist (e.g., max £127.7M, and min £100). These likely include data-entry errors or non-standard records and inflate the
mean. Median is often a more robust measure here.
- "other" property-type averages are unstable due to low/heterogeneous counts and outliers.
- I did not filter by is_new, duration, or other metadata; those filters can change results (for example excluding new-builds or
leaseholds).
- The tables are Price Paid-style transaction records (recorded sales) – they do not directly represent asking prices or valuations.
Suggested next steps (I can run these)
- Clean out obvious outliers (e.g., prices < £10k or > £10M) and recompute averages/medians.
- Produce regional / county / postcode-area summaries and maps.
- Compute month-on-month or rolling 3-month median to show trend through 2025.
- Produce year-on-year (YoY) growth rates by month (e.g., Mar 2025 vs Mar 2024).
- Forecast for full 2025 using simple extrapolation or time-series modelling (but better after deciding how to handle missing
months/outliers).
If you want, I can:
- Re-run the same aggregations after removing extreme outliers and show cleaned results.
- Produce YoY monthly growth and charts (I can return CSV or JSON aggregates you can chart).
Which would you like me to do next?
[INFO] 2025-10-10T11:27:51 mcp_agent.mcp.mcp_aggregator.database-anayst - Last aggregator closing, shutting down all persistent
connections...
[INFO] 2025-10-10T11:27:51 mcp_agent.mcp.mcp_connection_manager - Disconnecting all persistent server connections...
[INFO] 2025-10-10T11:27:51 mcp_agent.mcp.mcp_connection_manager - clickhouse: Requesting shutdown...
[INFO] 2025-10-10T11:27:51 mcp_agent.mcp.mcp_connection_manager - All persistent server connections signaled to disconnect.
[INFO] 2025-10-10T11:27:52 mcp_agent.mcp.mcp_aggregator.database-anayst - Connection manager successfully closed and removed from context
[INFO] 2025-10-10T11:27:52 mcp_agent.mcp_basic_agent - MCPApp cleanup
{
"data": {
"progress_action": "Finished",
"target": "mcp_basic_agent",
"agent_name": "mcp_application_loop"
}
}
```
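The outlier caveat the agent raises is easy to see in miniature: a single extreme record drags the mean far more than the median. A minimal, self-contained sketch with made-up prices (not the real dataset), applying the kind of cleaning band the agent suggests:

```python
from statistics import mean, median

# Hypothetical prices: five ordinary sales plus one data-entry outlier
prices = [250_000, 280_000, 281_000, 300_000, 320_000, 127_700_000]

print(round(mean(prices)))   # mean is dominated by the outlier
print(median(prices))        # median barely moves

# A simple cleaning band (e.g. keep £10k-£10M) before re-aggregating
cleaned = [p for p in prices if 10_000 <= p <= 10_000_000]
print(round(mean(cleaned)))
```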
slug: /use-cases/AI/MCP/ai-agent-libraries/upsonic
sidebar_label: 'Integrate Upsonic'
title: 'How to build an AI Agent with Upsonic and the ClickHouse MCP Server'
pagination_prev: null
pagination_next: null
description: 'Learn how to build an AI Agent with Upsonic and the ClickHouse MCP Server'
keywords: ['ClickHouse', 'MCP', 'Upsonic']
show_related_blogs: true
doc_type: 'guide'
How to build an AI Agent with Upsonic and the ClickHouse MCP Server
In this guide you'll learn how to build a
Upsonic
AI agent that can interact with
ClickHouse's SQL playground
using
ClickHouse's MCP Server
.
:::note Example notebook
This example can be found as a notebook in the
examples repository
.
:::
Prerequisites {#prerequisites}
You'll need to have Python installed on your system.
You'll need to have
pip
installed on your system.
You'll need an OpenAI API key
You can run the following steps either from your Python REPL or via script.
Install libraries {#install-libraries}
Install the required libraries by running the following commands:
python
pip install -q --upgrade pip
pip install -q "upsonic[loaders,tools]" openai
pip install -q ipywidgets
Setup credentials {#setup-credentials}
Next, you'll need to provide your OpenAI API key:
python
import os, getpass
os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter OpenAI API Key:")
response title="Response"
Enter OpenAI API Key: Β·Β·Β·Β·Β·Β·Β·Β·
Next, define the credentials needed to connect to the ClickHouse SQL playground:
python
env = {
"CLICKHOUSE_HOST": "sql-clickhouse.clickhouse.com",
"CLICKHOUSE_PORT": "8443",
"CLICKHOUSE_USER": "demo",
"CLICKHOUSE_PASSWORD": "",
"CLICKHOUSE_SECURE": "true"
}
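These variables are consumed by the MCP server process rather than by the agent itself. As a rough sketch (this mirrors what a stdio launcher typically does, not Upsonic's actual internals), the launcher merges them into the child process environment before spawning `mcp-clickhouse`:

```python
import os

env = {
    "CLICKHOUSE_HOST": "sql-clickhouse.clickhouse.com",
    "CLICKHOUSE_PORT": "8443",
    "CLICKHOUSE_USER": "demo",
    "CLICKHOUSE_PASSWORD": "",
    "CLICKHOUSE_SECURE": "true",
}

# Merge the current environment with the server-specific variables;
# a launcher would pass the merged dict as env= when spawning the
# stdio subprocess, so the server can read its connection settings
child_env = {**os.environ, **env}
print(child_env["CLICKHOUSE_HOST"])
```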
Initialize MCP Server and Upsonic agent {#initialize-mcp-and-agent}
Now configure the ClickHouse MCP Server to point at the ClickHouse SQL playground
and also initialize our agent and ask it a question:
```python
from upsonic import Agent, Task
from upsonic.models.openai import OpenAIResponsesModel

class DatabaseMCP:
    """
    MCP server for ClickHouse database operations.
    Provides tools for querying tables and databases.
    """
    command = "uv"
    args = [
        "run",
        "--with",
        "mcp-clickhouse",
        "--python",
        "3.10",
        "mcp-clickhouse"
    ]
    env = env

database_agent = Agent(
    name="Data Analyst",
    role="ClickHouse specialist.",
    goal="Query ClickHouse database and tables and answer questions",
    model=OpenAIResponsesModel(model_name="gpt-5-mini-2025-08-07")
)

task = Task(
    description="Tell me what happened in the UK property market in the 2020s. Use ClickHouse.",
    tools=[DatabaseMCP]
)

# Execute the workflow
workflow_result = database_agent.do(task)
print("\nMulti-MCP Workflow Result:")
print(workflow_result)
```
```response title="Response"
2025-10-10 11:26:12,758 - mcp.server.lowlevel.server - INFO - Processing request of type ListToolsRequest
Found 3 tools from DatabaseMCP
- list_databases: List available ClickHouse databases
- list_tables: List available ClickHouse tables in a database, including schema, comment,
row count, and column count.
- run_select_query: Run a SELECT query in a ClickHouse database
✓ MCP tools discovered via thread
...
```
slug: /use-cases/AI/MCP/ai-agent-libraries/pydantic-ai
sidebar_label: 'Integrate PydanticAI'
title: 'How to build a PydanticAI agent using ClickHouse MCP Server'
pagination_prev: null
pagination_next: null
description: 'Learn how to build a PydanticAI agent that can interact with ClickHouse MCP Server.'
keywords: ['ClickHouse', 'MCP', 'PydanticAI']
show_related_blogs: true
doc_type: 'guide'
How to build a PydanticAI agent using ClickHouse MCP Server
In this guide, you'll learn how to build a PydanticAI agent that can interact with ClickHouse's SQL playground using ClickHouse's MCP Server.
:::note Example notebook
This example can be found as a notebook in the examples repository.
:::
Prerequisites {#prerequisites}
You'll need to have Python installed on your system.
You'll need to have pip installed on your system.
You'll need an Anthropic API key, or an API key from another LLM provider.
You can run the following steps either from your Python REPL or via script.
Install libraries {#install-libraries}
Install the required library by running the following commands:
sh
pip install -q --upgrade pip
pip install -q "pydantic-ai-slim[mcp]"
pip install -q "pydantic-ai-slim[anthropic]" # replace with the appropriate package if using a different LLM provider
Setup credentials {#setup-credentials}
Next, you'll need to provide your Anthropic API key:
python
import os, getpass
os.environ["ANTHROPIC_API_KEY"] = getpass.getpass("Enter Anthropic API Key:")
response title="Response"
Enter Anthropic API Key: Β·Β·Β·Β·Β·Β·Β·Β·
:::note Using another LLM provider
If you don't have an Anthropic API key and want to use another LLM provider, you can find the instructions for setting up your credentials in the PydanticAI docs.
:::
Next, define the credentials needed to connect to the ClickHouse SQL playground:
python
env = {
"CLICKHOUSE_HOST": "sql-clickhouse.clickhouse.com",
"CLICKHOUSE_PORT": "8443",
"CLICKHOUSE_USER": "demo",
"CLICKHOUSE_PASSWORD": "",
"CLICKHOUSE_SECURE": "true"
}
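If you run your own ClickHouse server or Cloud instance, the same keys can be pulled from the environment rather than hard-coded. A minimal, dependency-free sketch — the `clickhouse_env` helper below is hypothetical, not part of any library:

```python
import os

# Hypothetical helper: start from the SQL playground defaults, let real
# environment variables override them, and let explicit overrides win over both.
def clickhouse_env(overrides=None):
    defaults = {
        "CLICKHOUSE_HOST": "sql-clickhouse.clickhouse.com",
        "CLICKHOUSE_PORT": "8443",
        "CLICKHOUSE_USER": "demo",
        "CLICKHOUSE_PASSWORD": "",
        "CLICKHOUSE_SECURE": "true",
    }
    env = {key: os.environ.get(key, default) for key, default in defaults.items()}
    env.update(overrides or {})
    return env

env = clickhouse_env()  # or clickhouse_env({"CLICKHOUSE_HOST": "my.host"})
```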
Initialize MCP Server and PydanticAI agent {#initialize-mcp}
Now configure the ClickHouse MCP Server to point at the ClickHouse SQL playground:
```python
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio
from pydantic_ai.messages import ToolCallPart, ToolReturnPart
server = MCPServerStdio(
'uv',
args=[
'run',
'--with', 'mcp-clickhouse',
'--python', '3.13',
'mcp-clickhouse'
],
env=env
)
agent = Agent('anthropic:claude-sonnet-4-0', mcp_servers=[server])
```
Ask the agent a question {#ask-agent}
Finally, you can ask the agent a question:
python
async with agent.run_mcp_servers():
result = await agent.run("Who's done the most PRs for ClickHouse?")
print(result.output)
You'll get back a similar response as below:
```response title="Response"
Based on the data from the ClickHouse GitHub repository, here are the top contributors by number of pull requests created:
Top contributors to ClickHouse by PRs opened:
alexey-milovidov - 3,370 PRs opened
azat - 1,905 PRs opened
rschu1ze - 979 PRs opened
alesapin - 947 PRs opened
tavplubix - 896 PRs opened
kssenii - 871 PRs opened
Avogar - 805 PRs opened
KochetovNicolai - 700 PRs opened
Algunenano - 658 PRs opened
kitaisreal - 630 PRs opened
Alexey Milovidov stands out as by far the most active contributor with over 3,370 pull requests opened, which is significantly more than any other contributor. This makes sense as Alexey Milovidov is one of the founders and lead developers of ClickHouse.
The data also shows that alexey-milovidov has been very active in managing PRs, with 12,818 "closed" events (likely reviewing and closing PRs from other contributors) in addition to creating his own PRs.
It's worth noting that I filtered out various robot/bot accounts that handle automated processes, focusing on human contributors to give you the most meaningful answer about who has contributed the most PRs to ClickHouse.
```
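The top-level `await` in the snippet above works in a REPL or notebook; in a standalone script the same call needs an event loop via `asyncio.run`. A minimal sketch of the wrapper pattern, using a stand-in object so it runs on its own (replace `FakeAgent()` with the real `agent` built earlier):

```python
import asyncio
from contextlib import asynccontextmanager
from types import SimpleNamespace

# FakeAgent is a stand-in so this wrapper pattern is runnable here;
# in the real script, pass in the PydanticAI `agent` instead.
class FakeAgent:
    @asynccontextmanager
    async def run_mcp_servers(self):
        yield  # the real agent starts and stops the MCP subprocess here

    async def run(self, prompt):
        return SimpleNamespace(output=f"answered: {prompt}")

async def main(agent):
    async with agent.run_mcp_servers():
        result = await agent.run("Who's done the most PRs for ClickHouse?")
        return result.output

if __name__ == "__main__":
    print(asyncio.run(main(FakeAgent())))
```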
---
slug: /use-cases/AI/MCP/ai-agent-libraries/slackbot
sidebar_label: 'Integrate SlackBot'
title: 'How to build a SlackBot agent using ClickHouse MCP Server.'
pagination_prev: null
pagination_next: null
description: 'Learn how to build a SlackBot agent that can interact with ClickHouse MCP Server.'
keywords: ['ClickHouse', 'MCP', 'Slack', 'SlackBot', 'PydanticAI']
show_related_blogs: true
doc_type: 'guide'
---
How to build a SlackBot agent using ClickHouse MCP Server
In this guide, you'll learn how to build a SlackBot agent. This bot allows you to ask questions about your ClickHouse data directly from Slack, using natural language. It uses the ClickHouse MCP Server and PydanticAI.
:::note Example project
The code for this example can be found in the examples repository.
:::
Prerequisites {#prerequisites}
You'll need to have uv installed.
You'll need access to a Slack workspace.
You'll need an Anthropic API key, or an API key from another LLM provider.
Create a Slack App {#create-a-slack-app}
Go to slack.com/apps and click Create New App.
Choose the From scratch option and give your app a name.
Select your Slack workspace.
Install the app to your workspace {#install-the-app-to-your-workspace}
Next, you'll need to add the app created in the previous step to your workspace. You can follow the instructions for "Add apps to your Slack workspace" in the Slack documentation.
Configure Slack app settings {#configure-slack-app-settings}
Go to App Home. Under Show Tabs → Messages Tab, enable Allow users to send Slash commands and messages from the messages tab.
Go to Socket Mode. Enable Socket Mode and note down the Socket Mode Handler for the environment variable SLACK_APP_TOKEN.
Go to OAuth & Permissions. Add the following Bot Token Scopes:
app_mentions:read
assistant:write
chat:write
im:history
im:read
im:write
channels:history
Then install the app to your workspace and note down the Bot User OAuth Token for the environment variable SLACK_BOT_TOKEN.
Go to Event Subscriptions. Enable Events. Under Subscribe to bot events, add:
app_mention
assistant_thread_started
message:im
Save Changes.
Add environment variables (.env) {#add-env-vars}
Create a .env file in the project root with the following environment variables, which will allow your app to connect to ClickHouse's SQL playground.
env
SLACK_BOT_TOKEN=your-slack-bot-token
SLACK_APP_TOKEN=your-slack-app-level-token
ANTHROPIC_API_KEY=your-anthropic-api-key
CLICKHOUSE_HOST=sql-clickhouse.clickhouse.com
CLICKHOUSE_PORT=8443
CLICKHOUSE_USER=demo
CLICKHOUSE_PASSWORD=
CLICKHOUSE_SECURE=true
You can adapt the ClickHouse variables to use your own ClickHouse server
or Cloud instance, if you would prefer.
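The example project most likely reads this file with a library such as python-dotenv. As a dependency-free illustration of what such a loader does, here is a minimal sketch — the `load_env_file` helper is hypothetical:

```python
import os

# Hypothetical, dependency-free sketch of a .env loader; a real project
# would typically use python-dotenv's load_dotenv() instead.
def load_env_file(path=".env"):
    """Copy KEY=VALUE lines into os.environ, skipping blanks and # comments."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

if os.path.exists(".env"):
    load_env_file()
```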
Using the bot {#using-the-bot}
1. Start the bot:
sh
uv run main.py
2. In Slack: