```python
import chdb

# JSON for APIs
json_result = chdb.query('SELECT version()', 'JSON')
print(json_result)

# Pretty format for debugging
pretty_result = chdb.query('SELECT * FROM system.numbers LIMIT 3', 'Pretty')
print(pretty_result)
```
## DataFrame operations {#dataframe-operations}
### Legacy DataFrame API {#legacy-dataframe-api}
```python
import chdb.dataframe as cdf
import pandas as pd

# Join multiple DataFrames
df1 = pd.DataFrame({'a': [1, 2, 3], 'b': ["one", "two", "three"]})
df2 = pd.DataFrame({'c': [1, 2, 3], 'd': ["①", "②", "③"]})
result_df = cdf.query(
    sql="SELECT * FROM __tbl1__ t1 JOIN __tbl2__ t2 ON t1.a = t2.c",
    tbl1=df1,
    tbl2=df2
)
print(result_df)

# Query the result DataFrame
summary = result_df.query('SELECT b, sum(a) FROM __table__ GROUP BY b')
print(summary)
```
### Python table engine (recommended) {#python-table-engine-recommended}
```python
import chdb
import pandas as pd
import pyarrow as pa
# Query Pandas DataFrame directly
df = pd.DataFrame({
    "customer_id": [1, 2, 3, 1, 2],
    "product": ["A", "B", "A", "C", "A"],
    "amount": [100, 200, 150, 300, 250],
    "metadata": [
        {'category': 'electronics', 'priority': 'high'},
        {'category': 'books', 'priority': 'low'},
        {'category': 'electronics', 'priority': 'medium'},
        {'category': 'clothing', 'priority': 'high'},
        {'category': 'books', 'priority': 'low'}
    ]
})

# Direct DataFrame querying with JSON support
result = chdb.query("""
    SELECT
        customer_id,
        sum(amount) as total_spent,
        toString(metadata.category) as category
    FROM Python(df)
    WHERE toString(metadata.priority) = 'high'
    GROUP BY customer_id, toString(metadata.category)
    ORDER BY total_spent DESC
""").show()

# Query Arrow Table
arrow_table = pa.table({
    "id": [1, 2, 3, 4],
    "name": ["Alice", "Bob", "Charlie", "David"],
    "score": [98, 89, 86, 95]
})
chdb.query("""
    SELECT name, score
    FROM Python(arrow_table)
    ORDER BY score DESC
""").show()
```
## Stateful sessions {#stateful-sessions}
Sessions maintain query state across multiple operations, enabling complex workflows:
```python
from chdb import session
# Temporary session (auto-cleanup)
sess = session.Session()

# Or persistent session with a specific path
sess = session.Session("/path/to/data")

# Create database and tables
sess.query("CREATE DATABASE IF NOT EXISTS analytics ENGINE = Atomic")
sess.query("USE analytics")
sess.query("""
    CREATE TABLE sales (
        id UInt64,
        product String,
        amount Decimal(10,2),
        sale_date Date
    ) ENGINE = MergeTree()
    ORDER BY (sale_date, id)
""")

# Insert data
sess.query("""
    INSERT INTO sales VALUES
        (1, 'Laptop', 999.99, '2024-01-15'),
        (2, 'Mouse', 29.99, '2024-01-16'),
        (3, 'Keyboard', 79.99, '2024-01-17')
""")
# Create materialized views
sess.query("""
CREATE MATERIALIZED VIEW daily_sales AS
SELECT
sale_date,
count() as orders,
sum(amount) as revenue
FROM sales
GROUP BY sale_date
""")
# Query the view
result = sess.query("SELECT * FROM daily_sales ORDER BY sale_date", "Pretty")
print(result)

# Session automatically manages resources
sess.close()  # Optional - auto-closed when object is deleted
```
### Advanced session features {#advanced-session-features}
```python
# Session with custom settings
sess = session.Session(
    path="/tmp/analytics_db",
)

# Query performance optimization
result = sess.query("""
    SELECT product, sum(amount) as total
    FROM sales
    GROUP BY product
    ORDER BY total DESC
    SETTINGS max_threads = 4
""", "JSON")
```
See also: test_stateful.py.
## Python DB-API 2.0 interface {#python-db-api-20}
Standard database interface for compatibility with existing Python applications:
```python
import chdb.dbapi as dbapi
# Check driver information
print(f"chDB driver version: {dbapi.get_client_info()}")

# Create connection
conn = dbapi.connect()
cursor = conn.cursor()

# Execute queries with parameters
cursor.execute("""
    SELECT number, number * ? as doubled
    FROM system.numbers
    LIMIT ?
""", (2, 5))

# Get metadata
print("Column descriptions:", cursor.description)
print("Row count:", cursor.rowcount)

# Fetch results
print("First row:", cursor.fetchone())
print("Next 2 rows:", cursor.fetchmany(2))

# Fetch remaining rows
for row in cursor.fetchall():
    print("Row:", row)

# Batch operations
data = [(1, 'Alice'), (2, 'Bob'), (3, 'Charlie')]
cursor.execute("""
    CREATE TABLE temp_users (
        id UInt64,
        name String
    ) ENGINE = MergeTree()
    ORDER BY (id)
""")
cursor.executemany(
    "INSERT INTO temp_users (id, name) VALUES (?, ?)",
    data
)
```
## User defined functions (UDF) {#user-defined-functions}
Extend SQL with custom Python functions:
### Basic UDF usage {#basic-udf-usage}
```python
from chdb.udf import chdb_udf
from chdb import query
# Simple mathematical function
@chdb_udf()
def add_numbers(a, b):
    return int(a) + int(b)

# String processing function
@chdb_udf()
def reverse_string(text):
    return text[::-1]

# JSON processing function
@chdb_udf()
def extract_json_field(json_str, field):
    import json
    try:
        data = json.loads(json_str)
        return str(data.get(field, ''))
    except Exception:
        return ''

# Use UDFs in queries
result = query("""
    SELECT
        add_numbers('10', '20') as sum_result,
        reverse_string('hello') as reversed,
        extract_json_field('{"name": "John", "age": 30}', 'name') as name
""")
print(result)
```
### Advanced UDF with custom return types {#advanced-udf-custom-return-types}
```python
# UDF with specific return type
@chdb_udf(return_type="Float64")
def calculate_bmi(height_str, weight_str):
    height = float(height_str) / 100  # Convert cm to meters
    weight = float(weight_str)
    return weight / (height * height)

# UDF for data validation
@chdb_udf(return_type="UInt8")
def is_valid_email(email):
    import re
    pattern = r'^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$'
    return 1 if re.match(pattern, email) else 0

# Use in complex queries
result = query("""
    SELECT
        name,
        calculate_bmi(height, weight) as bmi,
        is_valid_email(email) as has_valid_email
    FROM (
        SELECT
            'John' as name, '180' as height, '75' as weight, 'john@example.com' as email
        UNION ALL
        SELECT
            'Jane' as name, '165' as height, '60' as weight, 'invalid-email' as email
    )
""", "Pretty")
print(result)
```
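Since `@chdb_udf` wraps ordinary Python functions, the same logic can be unit-tested outside chDB before it is registered. A standalone sketch mirroring `is_valid_email` above (plain Python, no chDB required):

```python
import re

def is_valid_email(email: str) -> int:
    """Same validation logic as the UDF above, runnable without chDB."""
    pattern = r'^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$'
    return 1 if re.match(pattern, email) else 0

print(is_valid_email('john@example.com'))  # 1
print(is_valid_email('invalid-email'))     # 0
```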
### UDF best practices {#udf-best-practices}
- **Stateless functions**: UDFs should be pure functions without side effects
- **Import inside functions**: all required modules must be imported within the UDF body
- **String input/output**: all UDF parameters and return values are strings (TabSeparated format)
- **Error handling**: include try/except blocks for robust UDFs
- **Performance**: UDFs are called once per row, so optimize for per-call cost
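Because every UDF argument arrives as a TabSeparated string and the return value travels back as text, the function body must parse and serialize explicitly. A plain-Python sketch of that contract (no chDB involved):

```python
def add_numbers_udf(a: str, b: str) -> str:
    # chDB passes each argument as a string (TabSeparated format),
    # so parse explicitly and always return a string
    try:
        return str(int(a) + int(b))
    except (ValueError, TypeError):
        return ''  # Fail soft instead of raising per-row errors

print(add_numbers_udf('10', '20'))    # '30'
print(add_numbers_udf('10', 'oops'))  # ''
```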
```python
# Well-structured UDF with error handling
@chdb_udf(return_type="String")
def safe_json_extract(json_str, path):
    import json
    try:
        data = json.loads(json_str)
        keys = path.split('.')
        result = data
        for key in keys:
            if isinstance(result, dict) and key in result:
                result = result[key]
            else:
                return 'null'
        return str(result)
    except Exception as e:
        return f'error: {str(e)}'

# Use with complex nested JSON
query("""
    SELECT safe_json_extract(
        '{"user": {"profile": {"name": "Alice", "age": 25}}}',
        'user.profile.name'
    ) as extracted_name
""")
```
## Streaming query processing {#streaming-queries}
Process large datasets with constant memory usage:
```python
from chdb import session
sess = session.Session()
# Setup large dataset
sess.query("""
    CREATE TABLE large_data ENGINE = Memory() AS
    SELECT number as id, toString(number) as data
    FROM numbers(1000000)
""")

# Example 1: Basic streaming with context manager
total_rows = 0
with sess.send_query("SELECT * FROM large_data", "CSV") as stream:
    for chunk in stream:
        chunk_rows = len(chunk.data().split('\n')) - 1
        total_rows += chunk_rows
        print(f"Processed chunk: {chunk_rows} rows")
        # Early termination if needed
        if total_rows > 100000:
            break
print(f"Total rows processed: {total_rows}")
# Example 2: Manual iteration with explicit cleanup
stream = sess.send_query("SELECT * FROM large_data WHERE id % 100 = 0", "JSONEachRow")
processed_count = 0
while True:
    chunk = stream.fetch()
    if chunk is None:
        break
    # Process chunk data
    lines = chunk.data().strip().split('\n')
    for line in lines:
        if line:  # Skip empty lines
            processed_count += 1
    print(f"Processed {processed_count} records so far...")
stream.close()  # Important: explicit cleanup

# Example 3: Arrow integration for external libraries
import pyarrow as pa
from deltalake import write_deltalake

# Stream results in Arrow format
stream = sess.send_query("SELECT * FROM large_data LIMIT 100000", "Arrow")

# Create a RecordBatchReader with a custom batch size
batch_reader = stream.record_batch(rows_per_batch=10000)

# Export to Delta Lake
write_deltalake(
    table_or_uri="./my_delta_table",
    data=batch_reader,
    mode="overwrite"
)
stream.close()
sess.close()
```
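The JSONEachRow chunks used above are newline-delimited JSON, so each chunk decodes with only the standard library. A standalone parsing sketch (the sample chunk text is illustrative; no chDB required):

```python
import json

def parse_jsoneachrow(chunk_text: str) -> list:
    """Decode one JSONEachRow chunk into a list of dicts."""
    rows = []
    for line in chunk_text.strip().split('\n'):
        if line:  # Skip blank lines between rows
            rows.append(json.loads(line))
    return rows

sample_chunk = '{"id": 0, "data": "0"}\n{"id": 100, "data": "100"}\n'
rows = parse_jsoneachrow(sample_chunk)
print(len(rows), rows[0]["id"])  # 2 0
```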
## Python table engine {#python-table-engine}
### Query Pandas DataFrames {#query-pandas-dataframes}
```python
import chdb
import pandas as pd
# Complex DataFrame with nested data
df = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5, 6],
    "customer_name": ["Alice", "Bob", "Charlie", "Alice", "Bob", "David"],
    "orders": [
        {"order_id": 101, "amount": 250.50, "items": ["laptop", "mouse"]},
        {"order_id": 102, "amount": 89.99, "items": ["book"]},
        {"order_id": 103, "amount": 1299.99, "items": ["phone", "case", "charger"]},
        {"order_id": 104, "amount": 45.50, "items": ["pen", "paper"]},
        {"order_id": 105, "amount": 199.99, "items": ["headphones"]},
        {"order_id": 106, "amount": 15.99, "items": ["cable"]}
    ]
})

# Advanced querying with JSON operations
result = chdb.query("""
    SELECT
        customer_name,
        count() as order_count,
        sum(toFloat64(orders.amount)) as total_spent,
        arrayStringConcat(
            arrayDistinct(
                arrayFlatten(
                    groupArray(orders.items)
                )
            ),
            ', '
        ) as all_items
    FROM Python(df)
    GROUP BY customer_name
    HAVING total_spent > 100
    ORDER BY total_spent DESC
""").show()

# Window functions on DataFrames
window_result = chdb.query("""
    SELECT
        customer_name,
        toFloat64(orders.amount) as amount,
        sum(toFloat64(orders.amount)) OVER (
            PARTITION BY customer_name
            ORDER BY toInt32(orders.order_id)
        ) as running_total
    FROM Python(df)
    ORDER BY customer_name, toInt32(orders.order_id)
""", "Pretty")
print(window_result)
```
### Custom data sources with PyReader {#custom-data-sources-pyreader}
Implement custom data readers for specialized data sources:
```python
import chdb
from typing import List, Tuple, Any
import json
class DatabaseReader(chdb.PyReader):
    """Custom reader for database-like data sources"""

    def __init__(self, connection_string: str):
        # Simulate database connection
        self.data = self._load_data(connection_string)
        self.cursor = 0
        self.batch_size = 1000
        super().__init__(self.data)

    def _load_data(self, conn_str):
        # Simulate loading from a database
        return {
            "id": list(range(1, 10001)),
            "name": [f"user_{i}" for i in range(1, 10001)],
            "score": [i * 10 + (i % 7) for i in range(1, 10001)],
            "metadata": [
                json.dumps({"level": i % 5, "active": i % 3 == 0})
                for i in range(1, 10001)
            ]
        }

    def get_schema(self) -> List[Tuple[str, str]]:
        """Define table schema with explicit types"""
        return [
            ("id", "UInt64"),
            ("name", "String"),
            ("score", "Int64"),
            ("metadata", "String")  # JSON stored as string
        ]

    def read(self, col_names: List[str], count: int) -> List[List[Any]]:
        """Read data in batches"""
        if self.cursor >= len(self.data["id"]):
            return []  # No more data
        end_pos = min(self.cursor + min(count, self.batch_size), len(self.data["id"]))
        # Return data for the requested columns
        result = []
        for col in col_names:
            if col in self.data:
                result.append(self.data[col][self.cursor:end_pos])
            else:
                # Handle missing columns
                result.append([None] * (end_pos - self.cursor))
        self.cursor = end_pos
        return result
```
### JSON Type Inference and Handling {#json-type-inference-handling}
chDB automatically handles complex nested data structures:
```python
import pandas as pd
import chdb
# DataFrame with mixed JSON objects
df_with_json = pd.DataFrame({
    "user_id": [1, 2, 3, 4],
    "profile": [
        {"name": "Alice", "age": 25, "preferences": ["music", "travel"]},
        {"name": "Bob", "age": 30, "location": {"city": "NYC", "country": "US"}},
        {"name": "Charlie", "skills": ["python", "sql", "ml"], "experience": 5},
        {"score": 95, "rank": "gold", "achievements": [{"title": "Expert", "date": "2024-01-01"}]}
    ]
})

# Control JSON inference with settings
result = chdb.query("""
    SELECT
        user_id,
        profile.name as name,
        profile.age as age,
        length(profile.preferences) as pref_count,
        profile.location.city as city
    FROM Python(df_with_json)
    SETTINGS pandas_analyze_sample = 1000 -- Analyze all rows for JSON detection
""", "Pretty")
print(result)
# Advanced JSON operations
complex_json = chdb.query("""
    SELECT
        user_id,
        JSONLength(toString(profile)) as json_fields,
        JSONType(toString(profile), 'preferences') as pref_type,
        if(
            JSONHas(toString(profile), 'achievements'),
            JSONExtractString(toString(profile), 'achievements[0].title'),
            'None'
        ) as first_achievement
    FROM Python(df_with_json)
""", "JSONEachRow")
print(complex_json)
```
## Performance and optimization {#performance-optimization}
### Benchmarks {#benchmarks}
chDB consistently outperforms other embedded engines:
- **DataFrame operations**: 2-5x faster than traditional DataFrame libraries for analytical queries
- **Parquet processing**: competitive with leading columnar engines
- **Memory efficiency**: lower memory footprint than alternatives

More benchmark result details
### Performance tips {#performance-tips}
```python
import chdb

# 1. Use appropriate output formats
df_result = chdb.query("SELECT * FROM large_table", "DataFrame")   # For analysis
arrow_result = chdb.query("SELECT * FROM large_table", "Arrow")    # For interop
native_result = chdb.query("SELECT * FROM large_table", "Native")  # For chDB-to-chDB

# 2. Optimize queries with settings
fast_result = chdb.query("""
    SELECT customer_id, sum(amount)
    FROM sales
    GROUP BY customer_id
    SETTINGS
        max_threads = 8,
        max_memory_usage = '4G',
        use_uncompressed_cache = 1
""", "DataFrame")

# 3. Leverage streaming for large datasets
from chdb import session
sess = session.Session()

# Setup large dataset
sess.query("""
    CREATE TABLE large_sales ENGINE = Memory() AS
    SELECT
        number as sale_id,
        number % 1000 as customer_id,
        rand() % 1000 as amount
    FROM numbers(10000000)
""")

# Stream processing with constant memory usage
import json

total_amount = 0
processed_rows = 0
with sess.send_query("SELECT customer_id, sum(amount) as total FROM large_sales GROUP BY customer_id", "JSONEachRow") as stream:
    for chunk in stream:
        lines = chunk.data().strip().split('\n')
        for line in lines:
            if line:  # Skip empty lines
                row = json.loads(line)
                total_amount += row['total']
                processed_rows += 1
        print(f"Processed {processed_rows} customer records, running total: {total_amount}")
        # Early termination for demo
        if processed_rows > 1000:
            break
print(f"Final result: {processed_rows} customers processed, total amount: {total_amount}")

# Stream to external systems (e.g., Delta Lake)
stream = sess.send_query("SELECT * FROM large_sales LIMIT 1000000", "Arrow")
batch_reader = stream.record_batch(rows_per_batch=50000)

# Process in batches
for batch in batch_reader:
    print(f"Processing batch with {batch.num_rows} rows...")
    # Transform or export each batch
    # df_batch = batch.to_pandas()
    # process_batch(df_batch)
stream.close()
sess.close()
```
## GitHub repository {#github-repository}
- **Main Repository**: chdb-io/chdb
- **Issues and Support**: report issues on the GitHub repository
title: 'chDB for C and C++'
sidebar_label: 'C and C++'
slug: /chdb/install/c
description: 'How to install and use chDB with C and C++'
keywords: ['chdb', 'c', 'cpp', 'embedded', 'clickhouse', 'sql', 'olap', 'api']
doc_type: 'guide'
# chDB for C and C++
chDB provides a native C/C++ API for embedding ClickHouse functionality directly into your applications. The API supports both simple queries and advanced features like persistent connections and streaming query results.
## Installation {#installation}
### Step 1: Install libchdb {#install-libchdb}
Install the chDB library on your system:
```bash
curl -sL https://lib.chdb.io | bash
```
### Step 2: Include headers {#include-headers}
Include the chDB header in your project:
```c
#include <chdb.h>
```
### Step 3: Link library {#link-library}
Compile and link your application with chDB:
```bash
# C compilation
gcc -o myapp myapp.c -lchdb

# C++ compilation
g++ -o myapp myapp.cpp -lchdb
```
## C Examples {#c-examples}
### Basic connection and queries {#basic-connection-queries}
```c
#include <chdb.h>
#include <stdint.h>
#include <stdio.h>

int main() {
    // Create connection arguments
    char* args[] = {"chdb", "--path", "/tmp/chdb-data"};
    int argc = 3;

    // Connect to chDB
    chdb_connection* conn = chdb_connect(argc, args);
    if (!conn) {
        printf("Failed to connect to chDB\n");
        return 1;
    }

    // Execute a query
    chdb_result* result = chdb_query(*conn, "SELECT version()", "CSV");
    if (!result) {
        printf("Query execution failed\n");
        chdb_close_conn(conn);
        return 1;
    }

    // Check for errors
    const char* error = chdb_result_error(result);
    if (error) {
        printf("Query error: %s\n", error);
    } else {
        // Get result data
        char* data = chdb_result_buffer(result);
        size_t length = chdb_result_length(result);
        double elapsed = chdb_result_elapsed(result);
        uint64_t rows = chdb_result_rows_read(result);

        printf("Result: %.*s\n", (int)length, data);
        printf("Elapsed: %.3f seconds\n", elapsed);
        printf("Rows: %llu\n", (unsigned long long)rows);
    }

    // Cleanup
    chdb_destroy_query_result(result);
    chdb_close_conn(conn);
    return 0;
}
```
### Streaming queries {#streaming-queries}
```c
include
include
int main() {
char
args[] = {"chdb", "--path", "/tmp/chdb-stream"};
chdb_connection
conn = chdb_connect(3, args);
if (!conn) {
printf("Failed to connect\n");
return 1;
}
// Start streaming query
chdb_result* stream_result = chdb_stream_query(*conn,
"SELECT number FROM system.numbers LIMIT 1000000", "CSV");
if (!stream_result) {
printf("Failed to start streaming query\n");
chdb_close_conn(conn);
return 1;
}
uint64_t total_rows = 0;
// Process chunks
while (true) {
chdb_result* chunk = chdb_stream_fetch_result(*conn, stream_result);
if (!chunk) break;
// Check if we have data in this chunk
size_t chunk_length = chdb_result_length(chunk);
if (chunk_length == 0) {
chdb_destroy_query_result(chunk);
break; // End of stream
} | {"source_file": "c.md"} | [
        uint64_t chunk_rows = chdb_result_rows_read(chunk);
        total_rows += chunk_rows;
        printf("Processed chunk: %llu rows, %zu bytes\n",
               (unsigned long long)chunk_rows, chunk_length);

        // Process the chunk data here
        // char* data = chdb_result_buffer(chunk);

        chdb_destroy_query_result(chunk);

        // Progress reporting
        if (total_rows % 100000 == 0) {
            printf("Progress: %llu rows processed\n", (unsigned long long)total_rows);
        }
    }

    printf("Streaming complete. Total rows: %llu\n", (unsigned long long)total_rows);

    // Cleanup streaming query
    chdb_destroy_query_result(stream_result);
    chdb_close_conn(conn);
    return 0;
}
```
### Working with different data formats {#data-formats}
```c
#include <chdb.h>
#include <stdio.h>

int main() {
    char* args[] = {"chdb"};
    chdb_connection* conn = chdb_connect(1, args);

    const char* query = "SELECT number, toString(number) as str FROM system.numbers LIMIT 3";

    // CSV format
    chdb_result* csv_result = chdb_query(*conn, query, "CSV");
    printf("CSV Result:\n%.*s\n\n",
           (int)chdb_result_length(csv_result),
           chdb_result_buffer(csv_result));
    chdb_destroy_query_result(csv_result);

    // JSON format
    chdb_result* json_result = chdb_query(*conn, query, "JSON");
    printf("JSON Result:\n%.*s\n\n",
           (int)chdb_result_length(json_result),
           chdb_result_buffer(json_result));
    chdb_destroy_query_result(json_result);

    // Pretty format
    chdb_result* pretty_result = chdb_query(*conn, query, "Pretty");
    printf("Pretty Result:\n%.*s\n\n",
           (int)chdb_result_length(pretty_result),
           chdb_result_buffer(pretty_result));
    chdb_destroy_query_result(pretty_result);

    chdb_close_conn(conn);
    return 0;
}
```
## C++ example {#cpp-example}
```cpp
#include <chdb.h>
#include <iostream>
#include <stdexcept>
#include <string>
#include <vector>

class ChDBConnection {
private:
    chdb_connection* conn;

public:
    ChDBConnection(const std::vector<std::string>& args) {
        // Convert the string vector to a char* array
        std::vector<char*> argv;
        for (const auto& arg : args) {
            argv.push_back(const_cast<char*>(arg.c_str()));
        }
        conn = chdb_connect(argv.size(), argv.data());
        if (!conn) {
            throw std::runtime_error("Failed to connect to chDB");
        }
    }

    ~ChDBConnection() {
        if (conn) {
            chdb_close_conn(conn);
        }
    }

    std::string query(const std::string& sql, const std::string& format = "CSV") {
        chdb_result* result = chdb_query(*conn, sql.c_str(), format.c_str());
        if (!result) {
            throw std::runtime_error("Query execution failed");
        }
        const char* error = chdb_result_error(result);
        if (error) {
            std::string error_msg(error);
            chdb_destroy_query_result(result);
            throw std::runtime_error("Query error: " + error_msg);
        }
        std::string data(chdb_result_buffer(result), chdb_result_length(result));
        // Get query statistics
        std::cout << "Query statistics:\n";
        std::cout << "  Elapsed: " << chdb_result_elapsed(result) << " seconds\n";
        std::cout << "  Rows read: " << chdb_result_rows_read(result) << "\n";
        std::cout << "  Bytes read: " << chdb_result_bytes_read(result) << "\n";

        chdb_destroy_query_result(result);
        return data;
    }
};

int main() {
    try {
        // Create connection
        ChDBConnection db({"chdb", "--path", "/tmp/chdb-cpp"});

        // Create and populate table
        db.query("CREATE TABLE test (id UInt32, value String) ENGINE = MergeTree() ORDER BY id");
        db.query("INSERT INTO test VALUES (1, 'hello'), (2, 'world'), (3, 'chdb')");

        // Query with different formats
        std::cout << "CSV Results:\n" << db.query("SELECT * FROM test", "CSV") << "\n";
        std::cout << "JSON Results:\n" << db.query("SELECT * FROM test", "JSON") << "\n";

        // Aggregation query
        std::cout << "Count: " << db.query("SELECT COUNT(*) FROM test") << "\n";
    } catch (const std::exception& e) {
        std::cerr << "Error: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}
```
### Error handling best practices {#error-handling}
```c
#include <chdb.h>
#include <stdio.h>

int safe_query_example() {
    chdb_connection* conn = NULL;
    chdb_result* result = NULL;
    int return_code = 0;

    // Create connection
    char* args[] = {"chdb"};
    conn = chdb_connect(1, args);
    if (!conn) {
        printf("Failed to create connection\n");
        return 1;
    }

    // Execute query
    result = chdb_query(*conn, "SELECT invalid_syntax", "CSV");
    if (!result) {
        printf("Query execution failed\n");
        return_code = 1;
        goto cleanup;
    }

    // Check for query errors
    const char* error = chdb_result_error(result);
    if (error) {
        printf("Query error: %s\n", error);
        return_code = 1;
        goto cleanup;
    }

    // Process successful result
    printf("Result: %.*s\n",
           (int)chdb_result_length(result),
           chdb_result_buffer(result));

cleanup:
    if (result) chdb_destroy_query_result(result);
    if (conn) chdb_close_conn(conn);
    return return_code;
}
```
## GitHub repository {#github-repository}
- **Main Repository**: chdb-io/chdb
- **Issues and Support**: report issues on the GitHub repository
- **C API Documentation**: bindings documentation
title: 'Installing chDB for Rust'
sidebar_label: 'Rust'
slug: /chdb/install/rust
description: 'How to install and use chDB Rust bindings'
keywords: ['chdb', 'embedded', 'clickhouse-lite', 'rust', 'install', 'ffi', 'bindings']
doc_type: 'guide'
# chDB for Rust {#chdb-for-rust}
chDB-rust provides experimental FFI (Foreign Function Interface) bindings for chDB, enabling you to run ClickHouse queries directly in your Rust applications with zero external dependencies.
## Installation {#installation}
### Install libchdb {#install-libchdb}
Install the chDB library:
```bash
curl -sL https://lib.chdb.io | bash
```
## Usage {#usage}
chDB Rust provides both stateless and stateful query execution modes.
### Stateless usage {#stateless-usage}
For simple queries without persistent state:
```rust
use chdb_rust::{execute, arg::Arg, format::OutputFormat};
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Execute a simple query
let result = execute(
"SELECT version()",
Some(&[Arg::OutputFormat(OutputFormat::JSONEachRow)])
)?;
println!("ClickHouse version: {}", result.data_utf8()?);
// Query with CSV file
let result = execute(
"SELECT * FROM file('data.csv', 'CSV')",
Some(&[Arg::OutputFormat(OutputFormat::JSONEachRow)])
)?;
println!("CSV data: {}", result.data_utf8()?);
Ok(())
}
```
### Stateful usage (Sessions) {#stateful-usage-sessions}
For queries requiring persistent state like databases and tables:
```rust
use chdb_rust::{
session::SessionBuilder,
arg::Arg,
format::OutputFormat,
log_level::LogLevel
};
use tempdir::TempDir;
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create a temporary directory for database storage
let tmp = TempDir::new("chdb-rust")?;
// Build session with configuration
let session = SessionBuilder::new()
.with_data_path(tmp.path())
.with_arg(Arg::LogLevel(LogLevel::Debug))
.with_auto_cleanup(true) // Cleanup on drop
.build()?;
// Create database and table
session.execute(
"CREATE DATABASE demo; USE demo",
Some(&[Arg::MultiQuery])
)?;
session.execute(
"CREATE TABLE logs (id UInt64, msg String) ENGINE = MergeTree() ORDER BY id",
None,
)?;
// Insert data
session.execute(
"INSERT INTO logs (id, msg) VALUES (1, 'Hello'), (2, 'World')",
None,
)?;
// Query data
let result = session.execute(
"SELECT * FROM logs ORDER BY id",
Some(&[Arg::OutputFormat(OutputFormat::JSONEachRow)]),
)?;
println!("Query results:\n{}", result.data_utf8()?);
// Get query statistics
println!("Rows read: {}", result.rows_read());
println!("Bytes read: {}", result.bytes_read());
println!("Query time: {:?}", result.elapsed());
Ok(())
}
```
## Building and testing {#building-testing}
### Build the project {#build-the-project}
```bash
cargo build
```
### Run tests {#run-tests}
```bash
cargo test
```
### Development dependencies {#development-dependencies}
The project includes these development dependencies:
- `bindgen` (v0.70.1) - generate FFI bindings from C headers
- `tempdir` (v0.3.7) - temporary directory handling in tests
- `thiserror` (v1) - error handling utilities
## Error handling {#error-handling}
chDB Rust provides comprehensive error handling through the `Error` enum:
```rust
use chdb_rust::{execute, error::Error};
match execute("SELECT 1", None) {
    Ok(result) => {
        println!("Success: {}", result.data_utf8()?);
    },
    Err(Error::QueryError(msg)) => {
        eprintln!("Query failed: {}", msg);
    },
    Err(Error::NoResult) => {
        eprintln!("No result returned");
    },
    Err(Error::NonUtf8Sequence(e)) => {
        eprintln!("Invalid UTF-8: {}", e);
    },
    Err(e) => {
        eprintln!("Other error: {}", e);
    }
}
```
## GitHub repository {#github-repository}
You can find the GitHub repository for the project at chdb-io/chdb-rust.
title: 'Language Integrations Index'
slug: /chdb/install
description: 'Index page for chDB language integrations'
keywords: ['python', 'NodeJS', 'Go', 'Rust', 'Bun', 'C', 'C++']
doc_type: 'landing-page'
Instructions for how to get set up with chDB are available below for the following languages and runtimes:
| Language  | API Reference |
|-----------|---------------|
| Python    | Python API    |
| NodeJS    |               |
| Go        |               |
| Rust      |               |
| Bun       |               |
| C and C++ |               |
title: 'chDB for Node.js'
sidebar_label: 'Node.js'
slug: /chdb/install/nodejs
description: 'How to install and use chDB with Node.js'
keywords: ['chdb', 'nodejs', 'javascript', 'embedded', 'clickhouse', 'sql', 'olap']
doc_type: 'guide'
chDB for Node.js
chDB-node provides Node.js bindings for chDB, enabling you to run ClickHouse queries directly in your Node.js applications with zero external dependencies.
Installation {#installation}
bash
npm install chdb
Usage {#usage}
chDB-node supports two query modes: standalone queries for simple operations and session-based queries for maintaining database state.
Standalone queries {#standalone-queries}
For simple, one-off queries that don't require persistent state:
```javascript
const { query } = require("chdb");
// Basic query
const result = query("SELECT version()", "CSV");
console.log("ClickHouse version:", result);
// Query with multiple columns
const multiResult = query("SELECT 'Hello' as greeting, 'chDB' as engine, 42 as answer", "CSV");
console.log("Multi-column result:", multiResult);
// Mathematical operations
const mathResult = query("SELECT 2 + 2 as sum, pi() as pi_value", "JSON");
console.log("Math result:", mathResult);
// System information
const systemInfo = query("SELECT * FROM system.functions LIMIT 5", "Pretty");
console.log("System functions:", systemInfo);
```
Session-based queries {#session-based-queries}
```javascript
const { Session } = require("chdb");
// Create a session with persistent storage
const session = new Session("./chdb-node-data");
try {
  // Create database and table
  session.query(`
    CREATE DATABASE IF NOT EXISTS myapp;
    CREATE TABLE IF NOT EXISTS myapp.users (
      id UInt32,
      name String,
      email String,
      created_at DateTime DEFAULT now()
    ) ENGINE = MergeTree() ORDER BY id
  `);

  // Insert sample data
  session.query(`
    INSERT INTO myapp.users (id, name, email) VALUES
    (1, 'Alice', 'alice@example.com'),
    (2, 'Bob', 'bob@example.com'),
    (3, 'Charlie', 'charlie@example.com')
  `);

  // Query the data with different formats
  const csvResult = session.query("SELECT * FROM myapp.users ORDER BY id", "CSV");
  console.log("CSV Result:", csvResult);

  const jsonResult = session.query("SELECT * FROM myapp.users ORDER BY id", "JSON");
  console.log("JSON Result:", jsonResult);

  // Aggregate queries
  const stats = session.query(`
    SELECT
      COUNT(*) as total_users,
      MAX(id) as max_id,
      MIN(created_at) as earliest_signup
    FROM myapp.users
  `, "Pretty");
  console.log("User Statistics:", stats);
} finally {
  // Always cleanup the session
  session.cleanup(); // This deletes the database files
}
```
Processing external data {#processing-external-data}
```javascript
const { Session } = require("chdb");
const session = new Session("./data-processing"); | {"source_file": "nodejs.md"} | [
0.006890752352774143,
0.0017572372453287244,
0.00885856430977583,
0.09582941979169846,
0.007299254648387432,
0.005812703166157007,
-0.002061939798295498,
0.04251288250088692,
-0.007069624960422516,
-0.010525520890951157,
-0.05887928605079651,
0.012149843387305737,
0.04050431028008461,
-0.0... |
88523fef-d99c-462c-ad7d-617f1f9555e8 | Processing external data {#processing-external-data}
```javascript
const { Session } = require("chdb");
const session = new Session("./data-processing");
try {
  // Process CSV data from URL
  const result = session.query(`
    SELECT
      COUNT(*) as total_records,
      COUNT(DISTINCT "UserID") as unique_users
    FROM url('https://datasets.clickhouse.com/hits/hits.csv', 'CSV')
    LIMIT 1000
  `, "JSON");
  console.log("External data analysis:", result);

  // Create table from external data
  session.query(`
    CREATE TABLE web_analytics AS
    SELECT * FROM url('https://datasets.clickhouse.com/hits/hits.csv', 'CSV')
    LIMIT 10000
  `);

  // Analyze the imported data
  const analysis = session.query(`
    SELECT
      toDate("EventTime") as date,
      COUNT(*) as events,
      COUNT(DISTINCT "UserID") as unique_users
    FROM web_analytics
    GROUP BY date
    ORDER BY date
    LIMIT 10
  `, "Pretty");
  console.log("Daily analytics:", analysis);
} finally {
  session.cleanup();
}
```
Error handling {#error-handling}
Always handle errors appropriately when working with chDB:
```javascript
const { query, Session } = require("chdb");
// Error handling for standalone queries
function safeQuery(sql, format = "CSV") {
  try {
    const result = query(sql, format);
    return { success: true, data: result };
  } catch (error) {
    console.error("Query error:", error.message);
    return { success: false, error: error.message };
  }
}

// Example usage
const result = safeQuery("SELECT invalid_syntax");
if (result.success) {
  console.log("Query result:", result.data);
} else {
  console.log("Query failed:", result.error);
}

// Error handling for sessions
function safeSessionQuery() {
  const session = new Session("./error-test");
  try {
    // This will throw an error due to invalid syntax
    const result = session.query("CREATE TABLE invalid syntax", "CSV");
    console.log("Unexpected success:", result);
  } catch (error) {
    console.error("Session query error:", error.message);
  } finally {
    // Always cleanup, even if an error occurred
    session.cleanup();
  }
}

safeSessionQuery();
```
GitHub repository {#github-repository}
- GitHub Repository: chdb-io/chdb-node
- Issues and Support: report issues on the GitHub repository
- NPM Package: chdb on npm
-0.004234796389937401,
-0.005217842757701874,
-0.03992375731468201,
0.07643862813711166,
-0.052973899990320206,
-0.02285056561231613,
0.03867128863930702,
0.05916400998830795,
0.05225445330142975,
0.013347325846552849,
0.0027062769513577223,
-0.08308535814285278,
0.04031434282660484,
-0.02... |
5d8f953a-b0a7-49d0-b74b-599ff292d83c | title: 'How to query Parquet files'
sidebar_label: 'Querying Parquet files'
slug: /chdb/guides/querying-parquet
description: 'Learn how to query Parquet files with chDB.'
keywords: ['chdb', 'parquet']
doc_type: 'guide'
A lot of the world's data lives in Amazon S3 buckets.
In this guide, we'll learn how to query that data using chDB.
Setup {#setup}
Let's first create a virtual environment:
bash
python -m venv .venv
source .venv/bin/activate
And now we'll install chDB.
Make sure you have version 2.0.2 or higher:
bash
pip install "chdb>=2.0.2"
And now we're going to install IPython:
bash
pip install ipython
We're going to use
ipython
to run the commands in the rest of the guide, which you can launch by running:
bash
ipython
You can also use the code in a Python script or in your favorite notebook.
Exploring Parquet metadata {#exploring-parquet-metadata}
We're going to explore a Parquet file from the
Amazon reviews
dataset.
But first, let's import `chdb`:
python
import chdb
When querying Parquet files, we can use the
ParquetMetadata
input format to have it return Parquet metadata rather than the content of the file.
Let's use the
DESCRIBE
clause to see the fields returned when we use this format:
```python
query = """
DESCRIBE s3(
'https://datasets-documentation.s3.eu-west-3.amazonaws.com/amazon_reviews/amazon_reviews_2015.snappy.parquet',
ParquetMetadata
)
SETTINGS describe_compact_output=1
"""
chdb.query(query, 'TabSeparated')
```
text
num_columns UInt64
num_rows UInt64
num_row_groups UInt64
format_version String
metadata_size UInt64
total_uncompressed_size UInt64
total_compressed_size UInt64
columns Array(Tuple(name String, path String, max_definition_level UInt64, max_repetition_level UInt64, physical_type String, logical_type String, compression String, total_uncompressed_size UInt64, total_compressed_size UInt64, space_saved String, encodings Array(String)))
row_groups Array(Tuple(num_columns UInt64, num_rows UInt64, total_uncompressed_size UInt64, total_compressed_size UInt64, columns Array(Tuple(name String, path String, total_compressed_size UInt64, total_uncompressed_size UInt64, have_statistics Bool, statistics Tuple(num_values Nullable(UInt64), null_count Nullable(UInt64), distinct_count Nullable(UInt64), min Nullable(String), max Nullable(String))))))
Let's now have a look at the metadata for this file.
The `columns` and `row_groups` fields both contain arrays of tuples with many properties, so we'll exclude those for now.
```python
query = """
SELECT * EXCEPT(columns, row_groups)
FROM s3(
'https://datasets-documentation.s3.eu-west-3.amazonaws.com/amazon_reviews/amazon_reviews_2015.snappy.parquet',
ParquetMetadata
)
"""
chdb.query(query, 'Vertical')
``` | {"source_file": "querying-parquet.md"} | [
0.0013748627388849854,
-0.021891923621296883,
-0.04537101089954376,
-0.07411344349384308,
-0.025910021737217903,
0.014339885674417019,
0.0022633629851043224,
0.05467495322227478,
-0.010874771513044834,
-0.0083719901740551,
-0.030501171946525574,
-0.002944964449852705,
0.013688778504729271,
... |
0eebe251-a8a4-4ccf-8e0b-f2460e7a3564 | chdb.query(query, 'Vertical')
```
text
Row 1:
──────
num_columns: 15
num_rows: 41905631
num_row_groups: 42
format_version: 2.6
metadata_size: 79730
total_uncompressed_size: 14615827169
total_compressed_size: 9272262304
From this output, we learn that this Parquet file has over 40 million rows, split across 42 row groups, with 15 columns of data per row.
A row group is a logical horizontal partitioning of the data into rows.
Each row group has associated metadata and querying tools can make use of that metadata to efficiently query the file.
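Those per-row-group statistics are what make this efficient: a reader can skip any row group whose min/max range cannot satisfy a predicate. A minimal sketch of the idea, using hypothetical statistics rather than values from this file:

```python
# Hypothetical per-row-group min/max statistics for a "star_rating" column.
row_groups = [
    {"id": 0, "min": 1, "max": 3},
    {"id": 1, "min": 2, "max": 5},
    {"id": 2, "min": 4, "max": 5},
]

def groups_to_scan(stats, predicate_value):
    """Keep only row groups whose [min, max] range can contain the value."""
    return [g["id"] for g in stats if g["min"] <= predicate_value <= g["max"]]

# For WHERE star_rating = 5, row group 0 can be skipped entirely.
print(groups_to_scan(row_groups, 5))  # [1, 2]
```

ClickHouse's Parquet reader can apply this kind of pruning when filter push-down is enabled.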
Let's take a look at one of the row groups:
```python
query = """
WITH rowGroups AS (
SELECT rg
FROM s3(
'https://datasets-documentation.s3.eu-west-3.amazonaws.com/amazon_reviews/amazon_reviews_2015.snappy.parquet',
ParquetMetadata
)
ARRAY JOIN row_groups AS rg
LIMIT 1
)
SELECT tupleElement(c, 'name') AS name, tupleElement(c, 'total_compressed_size') AS total_compressed_size,
tupleElement(c, 'total_uncompressed_size') AS total_uncompressed_size,
tupleElement(tupleElement(c, 'statistics'), 'min') AS min,
tupleElement(tupleElement(c, 'statistics'), 'max') AS max
FROM rowGroups
ARRAY JOIN tupleElement(rg, 'columns') AS c
"""
chdb.query(query, 'DataFrame')
``` | {"source_file": "querying-parquet.md"} | [
-0.03756513446569443,
0.03515464812517166,
-0.08156738430261612,
-0.04231417179107666,
-0.038559865206480026,
-0.03651715815067291,
-0.026853030547499657,
0.01252754032611847,
0.03509369120001793,
0.03325216844677925,
-0.0070855156518518925,
0.03855661302804947,
0.03251481056213379,
-0.081... |
6ac6f07a-3202-474e-b4fe-19672838e0cb | chdb.query(query, 'DataFrame')
```
text
name total_compressed_size total_uncompressed_size min max
0 review_date 493 646 16455 16472
1 marketplace 66 64 US US
2 customer_id 5207967 7997207 10049 53096413
3 review_id 14748425 17991290 R10004U8OQDOGE RZZZUTBAV1RYI
4 product_id 8003456 13969668 0000032050 BT00DDVMVQ
5 product_parent 5758251 7974737 645 999999730
6 product_title 41068525 63355320 ! Small S 1pc Black 1pc Navy (Blue) Replacemen... 🌴 Vacation On The Beach
7 product_category 1726 1815 Apparel Pet Products
8 star_rating 369036 374046 1 5
9 helpful_votes 538940 1022990 0 3440
10 total_votes 610902 1080520 0 3619
11 vine 11426 125999 0 1
12 verified_purchase 102634 125999 0 1
13 review_headline 16538189 27634740 🤹🏽♂️🎤Great product. Practice makes perfect. D...
14 review_body 145886383 232457911 🚅 +🐧=💥 😀
Querying Parquet files {#querying-parquet-files} | {"source_file": "querying-parquet.md"} | [
-0.0062596662901341915,
-0.0038040196523070335,
-0.032130636274814606,
0.057510778307914734,
0.036334212869405746,
0.016136659309267998,
0.05501610413193703,
-0.005290844943374395,
-0.06992214173078537,
0.009760490618646145,
0.0878283679485321,
-0.02365495264530182,
0.02832050994038582,
-0... |
785a95ec-3f9d-4fff-8f88-e502c2b09bac | Querying Parquet files {#querying-parquet-files}
Next, let's query the contents of the file.
We can do this by adjusting the above query to remove
ParquetMetadata
and then, say, compute the most popular
star_rating
across all reviews:
```python
query = """
SELECT star_rating, count() AS count, formatReadableQuantity(count)
FROM s3(
'https://datasets-documentation.s3.eu-west-3.amazonaws.com/amazon_reviews/amazon_reviews_2015.snappy.parquet'
)
GROUP BY ALL
ORDER BY star_rating
"""
chdb.query(query, 'DataFrame')
```
text
star_rating count formatReadableQuantity(count())
0 1 3253070 3.25 million
1 2 1865322 1.87 million
2 3 3130345 3.13 million
3 4 6578230 6.58 million
4 5 27078664 27.08 million
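Using the counts above, a quick cross-check in plain Python shows how the ratings break down (the counts are copied from the query output):

```python
# Review counts per star rating, copied from the query output above.
counts = {1: 3253070, 2: 1865322, 3: 3130345, 4: 6578230, 5: 27078664}

five_star = counts[5]
others = sum(v for k, v in counts.items() if k != 5)

print(five_star > others)  # True: 5-star reviews outnumber all others combined
print(round(100 * five_star / sum(counts.values()), 1))  # 64.6 (percent of all reviews)
```

The total, 41,905,631, also matches the `num_rows` value we saw in the Parquet metadata earlier.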
Interestingly, there are more 5 star reviews than all the other ratings combined!
It looks like people like the products on Amazon or, if they don't, they just don't submit a rating. | {"source_file": "querying-parquet.md"} | [
-0.04865901917219162,
-0.050247613340616226,
-0.11288101226091385,
0.028190886601805687,
-0.04174061119556427,
0.0036062204744666815,
0.05147969722747803,
0.015246746130287647,
0.03682218864560127,
0.030200311914086342,
0.03853287920355797,
0.026479264721274376,
0.06896054744720459,
-0.090... |
b7dc767c-084f-4bcb-befe-8c0cdaf92cff | title: 'chDB Guides'
slug: /chdb/guides
description: 'Index page for chDB guides'
keywords: ['chdb', 'guides']
doc_type: 'landing-page'
Take a look at our chDB developer guides below:
| Page | Description |
|-----|-----|
| How to query a remote ClickHouse server | In this guide, we will learn how to query a remote ClickHouse server from chDB. |
| How to query Apache Arrow with chDB | In this guide, we will learn how to query Apache Arrow tables with chDB. |
| How to query data in an S3 bucket | Learn how to query data in an S3 bucket with chDB. |
| How to query Pandas DataFrames with chDB | Learn how to query Pandas DataFrames with chDB. |
| How to query Parquet files | Learn how to query Parquet files with chDB. |
| JupySQL and chDB | Learn how to use JupySQL with chDB. |
| Using a clickhouse-local database | Learn how to use a clickhouse-local database with chDB | | {"source_file": "index.md"} | [
0.06898429244756699,
-0.12304288148880005,
-0.018155986443161964,
-0.006457838695496321,
-0.03733781352639198,
-0.01653159223496914,
-0.008371955715119839,
-0.0034533629659563303,
-0.060820408165454865,
0.017256589606404305,
0.03694314509630203,
-0.028886742889881134,
0.008428889326751232,
... |
41c51203-5505-4882-99dd-52ebd3e120f7 | title: 'How to query data in an S3 bucket'
sidebar_label: 'Querying data in S3'
slug: /chdb/guides/querying-s3
description: 'Learn how to query data in an S3 bucket with chDB.'
keywords: ['chdb', 's3']
doc_type: 'guide'
A lot of the world's data lives in Amazon S3 buckets.
In this guide, we'll learn how to query that data using chDB.
Setup {#setup}
Let's first create a virtual environment:
bash
python -m venv .venv
source .venv/bin/activate
And now we'll install chDB.
Make sure you have version 2.0.2 or higher:
bash
pip install "chdb>=2.0.2"
And now we're going to install IPython:
bash
pip install ipython
We're going to use
ipython
to run the commands in the rest of the guide, which you can launch by running:
bash
ipython
You can also use the code in a Python script or in your favorite notebook.
Listing files in an S3 bucket {#listing-files-in-an-s3-bucket}
Let's start by listing all the files in
an S3 bucket that contains Amazon reviews
.
To do this, we can use the
s3
table function
and pass in the path to a file or a wildcard to a set of files.
:::tip
If you pass just the bucket name it will throw an exception.
:::
We're also going to use the `One` input format so that the file isn't parsed; instead, a single row is returned per file, and we can access the file name via the `_file` virtual column and its path via the `_path` virtual column.
```python
import chdb
chdb.query("""
SELECT
_file,
_path
FROM s3('s3://datasets-documentation/amazon_reviews/*.parquet', One)
SETTINGS output_format_pretty_row_numbers=0
""", 'PrettyCompact')
```
text
┌─_file───────────────────────────────┬─_path─────────────────────────────────────────────────────────────────────┐
│ amazon_reviews_2010.snappy.parquet │ datasets-documentation/amazon_reviews/amazon_reviews_2010.snappy.parquet │
│ amazon_reviews_1990s.snappy.parquet │ datasets-documentation/amazon_reviews/amazon_reviews_1990s.snappy.parquet │
│ amazon_reviews_2013.snappy.parquet │ datasets-documentation/amazon_reviews/amazon_reviews_2013.snappy.parquet │
│ amazon_reviews_2015.snappy.parquet │ datasets-documentation/amazon_reviews/amazon_reviews_2015.snappy.parquet │
│ amazon_reviews_2014.snappy.parquet │ datasets-documentation/amazon_reviews/amazon_reviews_2014.snappy.parquet │
│ amazon_reviews_2012.snappy.parquet │ datasets-documentation/amazon_reviews/amazon_reviews_2012.snappy.parquet │
│ amazon_reviews_2000s.snappy.parquet │ datasets-documentation/amazon_reviews/amazon_reviews_2000s.snappy.parquet │
│ amazon_reviews_2011.snappy.parquet │ datasets-documentation/amazon_reviews/amazon_reviews_2011.snappy.parquet │
└─────────────────────────────────────┴───────────────────────────────────────────────────────────────────────────┘
This bucket contains only Parquet files.
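The `*.parquet` pattern used above is a glob over object keys. Conceptually it behaves like shell-style matching, as in Python's `fnmatch` (ClickHouse's own glob syntax also supports `{a,b}` alternatives and numeric ranges):

```python
from fnmatch import fnmatch

# Hypothetical object keys; only the Parquet files match the wildcard
# used in the s3() table function.
files = [
    "amazon_reviews_2010.snappy.parquet",
    "amazon_reviews_1990s.snappy.parquet",
    "README.md",
]

matched = [f for f in files if fnmatch(f, "*.parquet")]
print(matched)  # the two .parquet files
```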
Querying files in an S3 bucket {#querying-files-in-an-s3-bucket} | {"source_file": "querying-s3-bucket.md"} | [
0.018933339044451714,
-0.023924481123685837,
-0.03869464620947838,
-0.01446987222880125,
0.015635831281542778,
0.040517959743738174,
-0.0010323749156668782,
0.04845491051673889,
-0.014579413458704948,
-0.00954616628587246,
-0.023415198549628258,
-0.017916793003678322,
0.05873917415738106,
... |
1dcc7c1e-864d-4b0e-ab29-edfa6d109016 | This bucket contains only Parquet files.
Querying files in an S3 bucket {#querying-files-in-an-s3-bucket}
Next, let's learn how to query those files.
If we want to count the number of rows in each of those files, we can run the following query:
python
chdb.query("""
SELECT
_file,
count() AS count,
formatReadableQuantity(count) AS readableCount
FROM s3('s3://datasets-documentation/amazon_reviews/*.parquet')
GROUP BY ALL
SETTINGS output_format_pretty_row_numbers=0
""", 'PrettyCompact')
text
┌─_file───────────────────────────────┬────count─┬─readableCount───┐
│ amazon_reviews_2013.snappy.parquet │ 28034255 │ 28.03 million │
│ amazon_reviews_1990s.snappy.parquet │ 639532 │ 639.53 thousand │
│ amazon_reviews_2011.snappy.parquet │ 6112495 │ 6.11 million │
│ amazon_reviews_2015.snappy.parquet │ 41905631 │ 41.91 million │
│ amazon_reviews_2012.snappy.parquet │ 11541011 │ 11.54 million │
│ amazon_reviews_2000s.snappy.parquet │ 14728295 │ 14.73 million │
│ amazon_reviews_2014.snappy.parquet │ 44127569 │ 44.13 million │
│ amazon_reviews_2010.snappy.parquet │ 3868472 │ 3.87 million │
└─────────────────────────────────────┴──────────┴─────────────────┘
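The `readableCount` column is produced by `formatReadableQuantity`. A rough pure-Python equivalent, shown as an approximation rather than ClickHouse's exact implementation:

```python
def readable_quantity(n: float) -> str:
    """Approximate ClickHouse's formatReadableQuantity for thousands and up."""
    for divisor, suffix in [(1_000_000_000, "billion"),
                            (1_000_000, "million"),
                            (1_000, "thousand")]:
        if n >= divisor:
            return f"{n / divisor:.2f} {suffix}"
    return str(n)

print(readable_quantity(41905631))  # 41.91 million, matching the table above
print(readable_quantity(639532))    # 639.53 thousand
```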
We can also pass in the HTTP URI for an S3 bucket and we'll get the same results:
python
chdb.query("""
SELECT
_file,
count() AS count,
formatReadableQuantity(count) AS readableCount
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/amazon_reviews/*.parquet')
GROUP BY ALL
SETTINGS output_format_pretty_row_numbers=0
""", 'PrettyCompact')
Let's have a look at the schema of these Parquet files using the
DESCRIBE
clause:
python
chdb.query("""
DESCRIBE s3('s3://datasets-documentation/amazon_reviews/*.parquet')
SETTINGS describe_compact_output=1
""", 'PrettyCompact')
text
┌─name──────────────┬─type─────────────┐
1. │ review_date │ Nullable(UInt16) │
2. │ marketplace │ Nullable(String) │
3. │ customer_id │ Nullable(UInt64) │
4. │ review_id │ Nullable(String) │
5. │ product_id │ Nullable(String) │
6. │ product_parent │ Nullable(UInt64) │
7. │ product_title │ Nullable(String) │
8. │ product_category │ Nullable(String) │
9. │ star_rating │ Nullable(UInt8) │
10. │ helpful_votes │ Nullable(UInt32) │
11. │ total_votes │ Nullable(UInt32) │
12. │ vine │ Nullable(Bool) │
13. │ verified_purchase │ Nullable(Bool) │
14. │ review_headline │ Nullable(String) │
15. │ review_body │ Nullable(String) │
└───────────────────┴──────────────────┘
Let's now compute the top product categories based on the number of reviews, as well as the average star rating:
python
chdb.query("""
SELECT product_category, count() AS reviews, round(avg(star_rating), 2) as avg
FROM s3('s3://datasets-documentation/amazon_reviews/*.parquet')
GROUP BY ALL
LIMIT 10
""", 'PrettyCompact') | {"source_file": "querying-s3-bucket.md"} | [
0.011842343024909496,
-0.06208137050271034,
-0.07152571529150009,
-0.01954992674291134,
-0.03413648158311844,
0.03244473412632942,
0.02203674241900444,
0.008796862326562405,
0.0735364705324173,
0.01254919171333313,
0.031244000419974327,
-0.004716846160590649,
0.05566652491688728,
-0.104836... |
bb243ea0-8468-4bcd-b13d-abb4286d7dde | text
┌─product_category─┬──reviews─┬──avg─┐
1. │ Toys │ 4864056 │ 4.21 │
2. │ Apparel │ 5906085 │ 4.11 │
3. │ Luggage │ 348644 │ 4.22 │
4. │ Kitchen │ 4880297 │ 4.21 │
5. │ Books │ 19530930 │ 4.34 │
6. │ Outdoors │ 2302327 │ 4.24 │
7. │ Video │ 380596 │ 4.19 │
8. │ Grocery │ 2402365 │ 4.31 │
9. │ Shoes │ 4366757 │ 4.24 │
10. │ Jewelry │ 1767667 │ 4.14 │
└──────────────────┴──────────┴──────┘
Querying files in a private S3 bucket {#querying-files-in-a-private-s3-bucket}
If we're querying files in a private S3 bucket, we need to pass in an access key and secret.
We can pass in those credentials to the
s3
table function:
python
chdb.query("""
SELECT product_category, count() AS reviews, round(avg(star_rating), 2) as avg
FROM s3('s3://datasets-documentation/amazon_reviews/*.parquet', 'access-key', 'secret')
GROUP BY ALL
LIMIT 10
""", 'PrettyCompact')
:::note
This query won't work because it's a public bucket!
:::
An alternative is to use named collections, but this approach isn't yet supported by chDB. | {"source_file": "querying-s3-bucket.md"} | [
-0.03752368688583374,
-0.009050305932760239,
-0.10278402268886566,
0.042091723531484604,
-0.008657866157591343,
0.050913140177726746,
0.07862725108861923,
-0.004299269523471594,
0.03815790265798569,
-0.026932988315820694,
0.06636977940797806,
-0.04302838072180748,
0.12213572859764099,
-0.1... |
d021113d-9e75-4917-8290-43f70ac963a1 | title: 'Using a clickhouse-local database'
sidebar_label: 'Using clickhouse-local database'
slug: /chdb/guides/clickhouse-local
description: 'Learn how to use a clickhouse-local database with chDB'
keywords: ['chdb', 'clickhouse-local']
doc_type: 'guide'
clickhouse-local
is a CLI with an embedded version of ClickHouse.
It gives users the power of ClickHouse without having to install a server.
In this guide, we will learn how to use a clickhouse-local database from chDB.
Setup {#setup}
Let's first create a virtual environment:
bash
python -m venv .venv
source .venv/bin/activate
And now we'll install chDB.
Make sure you have version 2.0.2 or higher:
bash
pip install "chdb>=2.0.2"
And now we're going to install
ipython
:
bash
pip install ipython
We're going to use
ipython
to run the commands in the rest of the guide, which you can launch by running:
bash
ipython
Installing clickhouse-local {#installing-clickhouse-local}
Downloading and installing clickhouse-local is the same as
downloading and installing ClickHouse
.
We can do this by running the following command:
bash
curl https://clickhouse.com/ | sh
To launch clickhouse-local with the data being persisted to a directory, we need to pass in a
--path
:
bash
./clickhouse -m --path demo.chdb
Ingesting data into clickhouse-local {#ingesting-data-into-clickhouse-local}
The default database only stores data in memory, so we'll need to create a named database to make sure any data we ingest is persisted to disk.
sql
CREATE DATABASE foo;
Let's create a table and insert some random numbers:
sql
CREATE TABLE foo.randomNumbers
ORDER BY number AS
SELECT rand() AS number
FROM numbers(10_000_000);
Let's write a query to see what data we've got:
```sql
SELECT quantilesExact(0, 0.5, 0.75, 0.99)(number) AS quants
FROM foo.randomNumbers
┌─quants────────────────────────────────┐
│ [69,2147776478,3221525118,4252096960] │
└───────────────────────────────────────┘
```
Once you've done that, make sure you
exit;
from the CLI as only one process can hold a lock on this directory.
If we don't do that, we'll get the following error when we try to connect to the database from chDB:
text
ChdbError: Code: 76. DB::Exception: Cannot lock file demo.chdb/status. Another server instance in same directory is already running. (CANNOT_OPEN_FILE)
Connecting to a clickhouse-local database {#connecting-to-a-clickhouse-local-database}
Go back to the
ipython
shell and import the
session
module from chDB:
python
from chdb import session as chs
Initialize a session pointing to
demo.chdb
:
python
sess = chs.Session("demo.chdb")
We can then run the same query that returns the quantiles of numbers:
```python
sess.query("""
SELECT quantilesExact(0, 0.5, 0.75, 0.99)(number) AS quants
FROM foo.randomNumbers
""", "Vertical")
Row 1:
──────
quants: [0,9976599,2147776478,4209286886]
```
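`quantilesExact` sorts every value and picks the elements at the requested fractions. A small pure-Python sketch of the idea (nearest-rank style; ClickHouse's exact tie-breaking details may differ):

```python
def quantiles_exact(values, fractions):
    """Return the element at position round(q * (n - 1)) of the sorted values."""
    ordered = sorted(values)
    n = len(ordered)
    return [ordered[round(q * (n - 1))] for q in fractions]

data = [10, 40, 20, 50, 30]
print(quantiles_exact(data, [0, 0.5, 0.75, 0.99]))  # [10, 30, 40, 50]
```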
We can also insert data into this database from chDB: | {"source_file": "clickhouse-local.md"} | [
0.0352361686527729,
-0.053131114691495895,
-0.06035826355218887,
0.007448786403983831,
-0.04679182916879654,
-0.012627821415662766,
0.024111395701766014,
0.030144965276122093,
-0.06795059889554977,
-0.05491790920495987,
0.021207671612501144,
-0.03408341109752655,
0.06365358084440231,
0.001... |
e2ca3c0d-4942-43b2-987a-ebd75cebfd28 | Row 1:
──────
quants: [0,9976599,2147776478,4209286886]
```
We can also insert data into this database from chDB:
```python
sess.query("""
INSERT INTO foo.randomNumbers
SELECT rand() AS number FROM numbers(10_000_000)
""")
```
We can then re-run the quantiles query from chDB or clickhouse-local. | {"source_file": "clickhouse-local.md"} | [
-0.03550894185900688,
-0.05308772251009941,
-0.058891549706459045,
0.026924272999167442,
-0.08975318819284439,
-0.05051948502659798,
0.07167791575193405,
-0.05899641290307045,
-0.00826337281614542,
0.0029046491254121065,
0.028008082881569862,
-0.009480717591941357,
0.010615105740725994,
-0... |
4d20599b-65e4-4e2e-9963-56069ff83a22 | title: 'How to query Pandas DataFrames with chDB'
sidebar_label: 'Querying Pandas'
slug: /chdb/guides/pandas
description: 'Learn how to query Pandas DataFrames with chDB'
keywords: ['chDB', 'Pandas']
show_related_blogs: true
doc_type: 'guide'
Pandas
is a popular library for data manipulation and analysis in Python.
In version 2 of chDB, we've improved the performance of querying Pandas DataFrames and introduced the
Python
table function.
In this guide, we will learn how to query Pandas using the
Python
table function.
Setup {#setup}
Let's first create a virtual environment:
bash
python -m venv .venv
source .venv/bin/activate
And now we'll install chDB.
Make sure you have version 2.0.2 or higher:
bash
pip install "chdb>=2.0.2"
And now we're going to install Pandas and a couple of other libraries:
bash
pip install pandas requests ipython
We're going to use
ipython
to run the commands in the rest of the guide, which you can launch by running:
bash
ipython
You can also use the code in a Python script or in your favorite notebook.
Creating a Pandas DataFrame from a URL {#creating-a-pandas-dataframe-from-a-url}
We're going to query some data from the
StatsBomb GitHub repository
.
Let's first import requests and pandas:
python
import requests
import pandas as pd
Then, we'll load one of the matches JSON files into a DataFrame:
python
response = requests.get(
"https://raw.githubusercontent.com/statsbomb/open-data/master/data/matches/223/282.json"
)
matches_df = pd.json_normalize(response.json(), sep='_')
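`pd.json_normalize` flattens nested JSON objects into columns, joining nested keys with the given separator, which is where column names like `home_team_home_team_name` come from. A toy example of the behavior:

```python
import pandas as pd

# Nested records, similar in shape to the StatsBomb matches JSON (made-up data).
records = [
    {"match_id": 1, "home_team": {"name": "Argentina", "score": 1}},
    {"match_id": 2, "home_team": {"name": "Colombia", "score": 0}},
]

toy_df = pd.json_normalize(records, sep="_")
print(list(toy_df.columns))  # ['match_id', 'home_team_name', 'home_team_score']
```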
Let's have a look at what data we'll be working with:
python
matches_df.iloc[0] | {"source_file": "querying-pandas.md"} | [
0.04186321422457695,
-0.015455021522939205,
-0.04984642192721367,
0.010615071281790733,
0.021448681131005287,
-0.0022999129723757505,
0.0032920758239924908,
0.026585744693875313,
-0.03942481428384781,
0.020618004724383354,
0.027213910594582558,
-0.017952188849449158,
-0.015427613630890846,
... |
a682bf96-f4b5-4d6d-8695-00d7d1ea49ab | text
match_id 3943077
match_date 2024-07-15
kick_off 04:15:00.000
home_score 1
away_score 0
match_status available
match_status_360 unscheduled
last_updated 2024-07-15T15:50:08.671355
last_updated_360 None
match_week 6
competition_competition_id 223
competition_country_name South America
competition_competition_name Copa America
season_season_id 282
season_season_name 2024
home_team_home_team_id 779
home_team_home_team_name Argentina
home_team_home_team_gender male
home_team_home_team_group None
home_team_country_id 11
home_team_country_name Argentina
home_team_managers [{'id': 5677, 'name': 'Lionel Sebastián Scalon...
away_team_away_team_id 769
away_team_away_team_name Colombia
away_team_away_team_gender male
away_team_away_team_group None
away_team_country_id 49
away_team_country_name Colombia
away_team_managers [{'id': 5905, 'name': 'Néstor Gabriel Lorenzo'...
metadata_data_version 1.1.0
metadata_shot_fidelity_version 2
metadata_xy_fidelity_version 2
competition_stage_id 26
competition_stage_name Final
stadium_id 5337
stadium_name Hard Rock Stadium | {"source_file": "querying-pandas.md"} | [
-0.0018334176857024431,
0.024311844259500504,
-0.031154463067650795,
0.023361075669527054,
0.10105183720588684,
0.07723439484834671,
-0.001394152408465743,
0.029124025255441666,
0.009207912720739841,
0.0432761088013649,
-0.029642652720212936,
-0.057821959257125854,
-0.005364251788705587,
0... |
326c8fc2-2af1-43d0-99d0-ff5b5b03a9f6 | stadium_id 5337
stadium_name Hard Rock Stadium
stadium_country_id 241
stadium_country_name United States of America
referee_id 2638
referee_name Raphael Claus
referee_country_id 31
referee_country_name Brazil
Name: 0, dtype: object | {"source_file": "querying-pandas.md"} | [
0.0050239404663443565,
-0.03066333197057247,
-0.036242011934518814,
-0.02795487642288208,
0.06504781544208527,
0.04818614572286606,
0.031852468848228455,
0.038123443722724915,
-0.028164776042103767,
0.08684275299310684,
-0.09353179484605789,
-0.16972899436950684,
-0.03526138514280319,
0.00... |
d74432a0-4d09-4cd7-871a-765160cef840 | Next, we'll load one of the events JSON files and also add a column called
match_id
to that DataFrame:
python
response = requests.get(
"https://raw.githubusercontent.com/statsbomb/open-data/master/data/events/3943077.json"
)
events_df = pd.json_normalize(response.json(), sep='_')
events_df["match_id"] = 3943077
And again, let's have a look at the first row:
python
with pd.option_context("display.max_rows", None):
first_row = events_df.iloc[0]
non_nan_columns = first_row[first_row.notna()].T
display(non_nan_columns)
text
id 279b7d66-92b5-4daa-8ff6-cba8fce271d9
index 1
period 1
timestamp 00:00:00.000
minute 0
second 0
possession 1
duration 0.0
type_id 35
type_name Starting XI
possession_team_id 779
possession_team_name Argentina
play_pattern_id 1
play_pattern_name Regular Play
team_id 779
team_name Argentina
tactics_formation 442.0
tactics_lineup [{'player': {'id': 6909, 'name': 'Damián Emili...
match_id 3943077
Name: 0, dtype: object
Querying Pandas DataFrames {#querying-pandas-dataframes}
Next, let's see how to query these DataFrames using chDB.
We'll import the library:
python
import chdb
We can query Pandas DataFrames by using the
Python
table function:
sql
SELECT *
FROM Python(<name-of-variable>)
So, if we wanted to list the columns in
matches_df
, we could write the following:
python
chdb.query("""
DESCRIBE Python(matches_df)
SETTINGS describe_compact_output=1
""", "DataFrame") | {"source_file": "querying-pandas.md"} | [
-0.04543238878250122,
0.0206715427339077,
-0.017350846901535988,
-0.0028348856139928102,
0.02254599891602993,
-0.01973695307970047,
-0.038650721311569214,
0.014011693187057972,
0.07557202875614166,
-0.021564314141869545,
0.025528185069561005,
-0.018490279093384743,
-0.07968607544898987,
0.... |
8ad397e8-57cc-4666-91e5-0a3362db34c8 | So, if we wanted to list the columns in
matches_df
, we could write the following:
python
chdb.query("""
DESCRIBE Python(matches_df)
SETTINGS describe_compact_output=1
""", "DataFrame")
text
name type
0 match_id Int64
1 match_date String
2 kick_off String
3 home_score Int64
4 away_score Int64
5 match_status String
6 match_status_360 String
7 last_updated String
8 last_updated_360 String
9 match_week Int64
10 competition_competition_id Int64
11 competition_country_name String
12 competition_competition_name String
13 season_season_id Int64
14 season_season_name String
15 home_team_home_team_id Int64
16 home_team_home_team_name String
17 home_team_home_team_gender String
18 home_team_home_team_group String
19 home_team_country_id Int64
20 home_team_country_name String
21 home_team_managers String
22 away_team_away_team_id Int64
23 away_team_away_team_name String
24 away_team_away_team_gender String
25 away_team_away_team_group String
26 away_team_country_id Int64
27 away_team_country_name String
28 away_team_managers String
29 metadata_data_version String
30 metadata_shot_fidelity_version String
31 metadata_xy_fidelity_version String
32 competition_stage_id Int64
33 competition_stage_name String
34 stadium_id Int64
35 stadium_name String
36 stadium_country_id Int64
37 stadium_country_name String
38 referee_id Int64
39 referee_name String
40 referee_country_id Int64
41 referee_country_name String
```
We could then find out which referees have officiated more than one match by writing the following query:
```python
chdb.query("""
SELECT referee_name, count() AS count
FROM Python(matches_df)
GROUP BY ALL
HAVING count > 1
ORDER BY count DESC
""", "DataFrame")
```

```text
referee_name count
0 César Arturo Ramos Palazuelos 3
1 Maurizio Mariani 3
2 Piero Maza Gomez 3
3 Mario Alberto Escobar Toca 2
4 Wilmar Alexander Roldán Pérez 2
5 Jesús Valenzuela Sáez 2
6 Wilton Pereira Sampaio 2
7 Darío Herrera 2
8 Andrés Matonte 2
9 Raphael Claus 2
```

Now, let's explore `events_df`:

```python
chdb.query("""
SELECT pass_recipient_name, count()
FROM Python(events_df)
WHERE type_name = 'Pass' AND pass_recipient_name <> ''
GROUP BY ALL
ORDER BY count() DESC
LIMIT 10
""", "DataFrame") | {"source_file": "querying-pandas.md"} | [
```text
pass_recipient_name count()
0 Davinson Sánchez Mina 76
1 Ángel Fabián Di María Hernández 64
2 Alexis Mac Allister 62
3 Enzo Fernandez 57
4 James David Rodríguez Rubio 56
5 Johan Andrés Mojica Palacio 55
6 Rodrigo Javier De Paul 54
7 Jefferson Andrés Lerma Solís 53
8 Jhon Adolfo Arias Andrade 52
9 Carlos Eccehomo Cuesta Figueroa 50
```

Joining Pandas DataFrames {#joining-pandas-dataframes}
We can also join DataFrames together in a query.
For example, to get an overview of the match, we could write the following query:
```python
chdb.query("""
SELECT home_team_home_team_name, away_team_away_team_name, home_score, away_score,
countIf(type_name = 'Pass' AND possession_team_id=home_team_home_team_id) AS home_passes,
countIf(type_name = 'Pass' AND possession_team_id=away_team_away_team_id) AS away_passes,
countIf(type_name = 'Shot' AND possession_team_id=home_team_home_team_id) AS home_shots,
countIf(type_name = 'Shot' AND possession_team_id=away_team_away_team_id) AS away_shots
FROM Python(matches_df) AS matches
JOIN Python(events_df) AS events ON events.match_id = matches.match_id
GROUP BY ALL
LIMIT 5
""", "DataFrame").iloc[0]
```

```text
home_team_home_team_name Argentina
away_team_away_team_name Colombia
home_score 1
away_score 0
home_passes 527
away_passes 669
home_shots 11
away_shots 19
Name: 0, dtype: object
```

Populating a table from a DataFrame {#populating-a-table-from-a-dataframe}
We can also create and populate ClickHouse tables from DataFrames.
If we want to create a table in chDB, we need to use the Stateful Session API.
Let's import the session module:
```python
from chdb import session as chs
```
Initialize a session:
```python
sess = chs.Session()
```
Next, we'll create a database:
```python
sess.query("CREATE DATABASE statsbomb")
```
Then, create an `events` table based on `events_df`:

```python
sess.query("""
CREATE TABLE statsbomb.events ORDER BY id AS
SELECT *
FROM Python(events_df)
""")
We can then run the query that returns the top pass recipient:
```python
sess.query("""
SELECT pass_recipient_name, count()
FROM statsbomb.events
WHERE type_name = 'Pass' AND pass_recipient_name <> ''
GROUP BY ALL
ORDER BY count() DESC
LIMIT 10
""", "DataFrame") | {"source_file": "querying-pandas.md"} | [
```text
pass_recipient_name count()
0 Davinson Sánchez Mina 76
1 Ángel Fabián Di María Hernández 64
2 Alexis Mac Allister 62
3 Enzo Fernandez 57
4 James David Rodríguez Rubio 56
5 Johan Andrés Mojica Palacio 55
6 Rodrigo Javier De Paul 54
7 Jefferson Andrés Lerma Solís 53
8 Jhon Adolfo Arias Andrade 52
9 Carlos Eccehomo Cuesta Figueroa 50
```

Joining a Pandas DataFrame and table {#joining-a-pandas-dataframe-and-table}
Finally, we can also update our join query to join the `matches_df` DataFrame with the `statsbomb.events` table:

```python
sess.query("""
SELECT home_team_home_team_name, away_team_away_team_name, home_score, away_score,
countIf(type_name = 'Pass' AND possession_team_id=home_team_home_team_id) AS home_passes,
countIf(type_name = 'Pass' AND possession_team_id=away_team_away_team_id) AS away_passes,
countIf(type_name = 'Shot' AND possession_team_id=home_team_home_team_id) AS home_shots,
countIf(type_name = 'Shot' AND possession_team_id=away_team_away_team_id) AS away_shots
FROM Python(matches_df) AS matches
JOIN statsbomb.events AS events ON events.match_id = matches.match_id
GROUP BY ALL
LIMIT 5
""", "DataFrame").iloc[0]
```

```text
home_team_home_team_name Argentina
away_team_away_team_name Colombia
home_score 1
away_score 0
home_passes 527
away_passes 669
home_shots 11
away_shots 19
Name: 0, dtype: object
```
---
title: 'JupySQL and chDB'
sidebar_label: 'JupySQL'
slug: /chdb/guides/jupysql
description: 'How to use JupySQL with chDB'
keywords: ['chdb', 'JupySQL']
doc_type: 'guide'
---

import Image from '@theme/IdealImage';
import PlayersPerRank from '@site/static/images/chdb/guides/players_per_rank.png';
JupySQL is a Python library that lets you run SQL in Jupyter notebooks and the IPython shell.
In this guide, we're going to learn how to query data using chDB and JupySQL.
Setup {#setup}
Let's first create a virtual environment:
```bash
python -m venv .venv
source .venv/bin/activate
```
And then, we'll install JupySQL, IPython, and Jupyter Lab:
```bash
pip install jupysql ipython jupyterlab
```
We can use JupySQL in IPython, which we can launch by running:
```bash
ipython
```
Or in Jupyter Lab, by running:
```bash
jupyter lab
```
:::note
If you're using Jupyter Lab, you'll need to create a notebook before following the rest of the guide.
:::
Downloading a dataset {#downloading-a-dataset}
We're going to use Jeff Sackmann's tennis_atp dataset, which contains metadata about players and their rankings over time.
Let's start by downloading the rankings files:
```python
from urllib.request import urlretrieve
```

```python
files = ['00s', '10s', '20s', '70s', '80s', '90s', 'current']
base = "https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master"
for file in files:
_ = urlretrieve(
f"{base}/atp_rankings_{file}.csv",
f"atp_rankings_{file}.csv",
)
```
Configuring chDB and JupySQL {#configuring-chdb-and-jupysql}
Next, let's import the `dbapi` module for chDB:

```python
from chdb import dbapi
```
And we'll create a chDB connection.
Any data that we persist will be saved to the `atp.chdb` directory:

```python
conn = dbapi.connect(path="atp.chdb")
```
Let's now load the `sql` magic and create a connection to chDB:

```python
%load_ext sql
%sql conn --alias chdb
```
Next, we'll remove the display limit so that results of queries won't be truncated:
```python
%config SqlMagic.displaylimit = None
```
Querying data in CSV files {#querying-data-in-csv-files}
We've downloaded a bunch of files with the `atp_rankings` prefix.
Let's use the `DESCRIBE` clause to understand the schema:

```python
%%sql
DESCRIBE file('atp_rankings*.csv')
SETTINGS describe_compact_output=1,
schema_inference_make_columns_nullable=0
```

```text
+--------------+-------+
| name | type |
+--------------+-------+
| ranking_date | Int64 |
| rank | Int64 |
| player | Int64 |
| points | Int64 |
+--------------+-------+
```

We can also write a `SELECT` query directly against these files to see what the data looks like:

```python
%sql SELECT * FROM file('atp_rankings*.csv') LIMIT 1
```

```text
+--------------+------+--------+--------+
| ranking_date | rank | player | points |
+--------------+------+--------+--------+
| 20000110 | 1 | 101736 | 4135 |
+--------------+------+--------+--------+
```
The format of the data is a bit weird.
Let's clean that date up and use the `REPLACE` clause to return the cleaned up `ranking_date`:

```python
%%sql
SELECT * REPLACE (
toDate(parseDateTime32BestEffort(toString(ranking_date))) AS ranking_date
)
FROM file('atp_rankings*.csv')
LIMIT 10
SETTINGS schema_inference_make_columns_nullable=0
```

```text
+--------------+------+--------+--------+
| ranking_date | rank | player | points |
+--------------+------+--------+--------+
| 2000-01-10 | 1 | 101736 | 4135 |
| 2000-01-10 | 2 | 102338 | 2915 |
| 2000-01-10 | 3 | 101948 | 2419 |
| 2000-01-10 | 4 | 103017 | 2184 |
| 2000-01-10 | 5 | 102856 | 2169 |
| 2000-01-10 | 6 | 102358 | 2107 |
| 2000-01-10 | 7 | 102839 | 1966 |
| 2000-01-10 | 8 | 101774 | 1929 |
| 2000-01-10 | 9 | 102701 | 1846 |
| 2000-01-10 | 10 | 101990 | 1739 |
+--------------+------+--------+--------+
```

Importing CSV files into chDB {#importing-csv-files-into-chdb}
Now we're going to store the data from these CSV files in a table.
The default database doesn't persist data on disk, so we need to create another database first:
```python
%sql CREATE DATABASE atp
```
And now we're going to create a table called `rankings` whose schema will be derived from the structure of the data in the CSV files:

```python
%%sql
CREATE TABLE atp.rankings
ENGINE=MergeTree
ORDER BY ranking_date AS
SELECT * REPLACE (
toDate(parseDateTime32BestEffort(toString(ranking_date))) AS ranking_date
)
FROM file('atp_rankings*.csv')
SETTINGS schema_inference_make_columns_nullable=0
```
Let's do a quick check on the data in our table:
```python
%sql SELECT * FROM atp.rankings LIMIT 10
```

```text
+--------------+------+--------+--------+
| ranking_date | rank | player | points |
+--------------+------+--------+--------+
| 2000-01-10 | 1 | 101736 | 4135 |
| 2000-01-10 | 2 | 102338 | 2915 |
| 2000-01-10 | 3 | 101948 | 2419 |
| 2000-01-10 | 4 | 103017 | 2184 |
| 2000-01-10 | 5 | 102856 | 2169 |
| 2000-01-10 | 6 | 102358 | 2107 |
| 2000-01-10 | 7 | 102839 | 1966 |
| 2000-01-10 | 8 | 101774 | 1929 |
| 2000-01-10 | 9 | 102701 | 1846 |
| 2000-01-10 | 10 | 101990 | 1739 |
+--------------+------+--------+--------+
```
Looks good - the output, as expected, is the same as when querying the CSV files directly.
We're going to follow the same process for the player metadata.
This time the data is all in a single CSV file, so let's download that file:
```python
_ = urlretrieve(
f"{base}/atp_players.csv",
"atp_players.csv",
)
```
And then create a table called `players` based on the content of the CSV file.
We'll also clean up the `dob` field so that it's a `Date32` type.
In ClickHouse, the `Date` type only supports dates from 1970 onwards. Since the `dob` column contains dates from before 1970, we'll use the `Date32` type instead.
```python
%%sql
CREATE TABLE atp.players
ENGINE=MergeTree
ORDER BY player_id AS
SELECT * REPLACE (
makeDate32(
toInt32OrNull(substring(toString(dob), 1, 4)),
toInt32OrNull(substring(toString(dob), 5, 2)),
toInt32OrNull(substring(toString(dob), 7, 2))
)::Nullable(Date32) AS dob
)
FROM file('atp_players.csv')
SETTINGS schema_inference_make_columns_nullable=0
```
Once that's finished running, we can have a look at the data we've ingested:
```python
%sql SELECT * FROM atp.players LIMIT 10
```

```text
+-----------+------------+-----------+------+------------+-----+--------+-------------+
| player_id | name_first | name_last | hand | dob | ioc | height | wikidata_id |
+-----------+------------+-----------+------+------------+-----+--------+-------------+
| 100001 | Gardnar | Mulloy | R | 1913-11-22 | USA | 185 | Q54544 |
| 100002 | Pancho | Segura | R | 1921-06-20 | ECU | 168 | Q54581 |
| 100003 | Frank | Sedgman | R | 1927-10-02 | AUS | 180 | Q962049 |
| 100004 | Giuseppe | Merlo | R | 1927-10-11 | ITA | 0 | Q1258752 |
| 100005 | Richard | Gonzalez | R | 1928-05-09 | USA | 188 | Q53554 |
| 100006 | Grant | Golden | R | 1929-08-21 | USA | 175 | Q3115390 |
| 100007 | Abe | Segal | L | 1930-10-23 | RSA | 0 | Q1258527 |
| 100008 | Kurt | Nielsen | R | 1930-11-19 | DEN | 0 | Q552261 |
| 100009 | Istvan | Gulyas | R | 1931-10-14 | HUN | 0 | Q51066 |
| 100010 | Luis | Ayala | R | 1932-09-18 | CHI | 170 | Q1275397 |
+-----------+------------+-----------+------+------------+-----+--------+-------------+
```
Querying chDB {#querying-chdb}
Data ingestion is done, now it's time for the fun part - querying the data!
Tennis players receive points based on how well they perform in the tournaments they play.
Each player's points are accumulated over a 52-week rolling period.
We're going to write a query that finds the maximum points accumulated by each player, along with their ranking at the time:
```python
%%sql
%%sql
SELECT name_first, name_last,
max(points) as maxPoints,
argMax(rank, points) as rank,
argMax(ranking_date, points) as date
FROM atp.players
JOIN atp.rankings ON rankings.player = players.player_id
GROUP BY ALL
ORDER BY maxPoints DESC
LIMIT 10
```
```text
+------------+-----------+-----------+------+------------+
| name_first | name_last | maxPoints | rank | date |
+------------+-----------+-----------+------+------------+
| Novak | Djokovic | 16950 | 1 | 2016-06-06 |
| Rafael | Nadal | 15390 | 1 | 2009-04-20 |
| Andy | Murray | 12685 | 1 | 2016-11-21 |
| Roger | Federer | 12315 | 1 | 2012-10-29 |
| Daniil | Medvedev | 10780 | 2 | 2021-09-13 |
| Carlos | Alcaraz | 9815 | 1 | 2023-08-21 |
| Dominic | Thiem | 9125 | 3 | 2021-01-18 |
| Jannik | Sinner | 8860 | 2 | 2024-05-06 |
| Stefanos | Tsitsipas | 8350 | 3 | 2021-09-20 |
| Alexander | Zverev | 8240 | 4 | 2021-08-23 |
+------------+-----------+-----------+------+------------+
```

It's quite interesting that some of the players in this list accumulated their maximum points total without being ranked number 1 at the time.
Saving queries {#saving-queries}
We can save queries using the `--save` parameter on the same line as the `%%sql` magic.
The `--no-execute` parameter means that query execution will be skipped.

```python
%%sql --save best_points --no-execute
SELECT name_first, name_last,
max(points) as maxPoints,
argMax(rank, points) as rank,
argMax(ranking_date, points) as date
FROM atp.players
JOIN atp.rankings ON rankings.player = players.player_id
GROUP BY ALL
ORDER BY maxPoints DESC
```
When we run a saved query it will be converted into a Common Table Expression (CTE) before executing.
In the following query we compute the maximum points achieved by players when they were ranked 1:
```python
%sql select * FROM best_points WHERE rank=1
```

```text
+-------------+-----------+-----------+------+------------+
| name_first | name_last | maxPoints | rank | date |
+-------------+-----------+-----------+------+------------+
| Novak | Djokovic | 16950 | 1 | 2016-06-06 |
| Rafael | Nadal | 15390 | 1 | 2009-04-20 |
| Andy | Murray | 12685 | 1 | 2016-11-21 |
| Roger | Federer | 12315 | 1 | 2012-10-29 |
| Carlos | Alcaraz | 9815 | 1 | 2023-08-21 |
| Pete | Sampras | 5792 | 1 | 1997-08-11 |
| Andre | Agassi | 5652 | 1 | 1995-08-21 |
| Lleyton | Hewitt | 5205 | 1 | 2002-08-12 |
| Gustavo | Kuerten | 4750 | 1 | 2001-09-10 |
| Juan Carlos | Ferrero | 4570 | 1 | 2003-10-20 |
| Stefan | Edberg | 3997 | 1 | 1991-02-25 |
| Jim | Courier | 3973 | 1 | 1993-08-23 |
| Ivan | Lendl | 3420 | 1 | 1990-02-26 |
| Ilie | Nastase | 0 | 1 | 1973-08-27 |
+-------------+-----------+-----------+------+------------+
```
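Conceptually, JupySQL inlines the saved snippet as a CTE, so the query that actually reaches chDB looks roughly like this (a sketch, not the literal generated SQL):

```sql
WITH best_points AS (
    SELECT name_first, name_last,
           max(points) AS maxPoints,
           argMax(rank, points) AS rank,
           argMax(ranking_date, points) AS date
    FROM atp.players
    JOIN atp.rankings ON rankings.player = players.player_id
    GROUP BY ALL
    ORDER BY maxPoints DESC
)
SELECT * FROM best_points WHERE rank = 1
```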
Querying with parameters {#querying-with-parameters}
We can also use parameters in our queries.
Parameters are just normal variables:
```python
rank = 10
```
And then we can use the `{{variable}}` syntax in our query.
The following query finds the players who had the least number of days between when they first had a ranking in the top 10 and last had a ranking in the top 10:

```python
%%sql
SELECT name_first, name_last,
MIN(ranking_date) AS earliest_date,
MAX(ranking_date) AS most_recent_date,
most_recent_date - earliest_date AS days,
1 + (days/7) AS weeks
FROM atp.rankings
JOIN atp.players ON players.player_id = rankings.player
WHERE rank <= {{rank}}
GROUP BY ALL
ORDER BY days
LIMIT 10
```

```text
+------------+-----------+---------------+------------------+------+-------+
| name_first | name_last | earliest_date | most_recent_date | days | weeks |
+------------+-----------+---------------+------------------+------+-------+
| Alex | Metreveli | 1974-06-03 | 1974-06-03 | 0 | 1 |
| Mikael | Pernfors | 1986-09-22 | 1986-09-22 | 0 | 1 |
| Felix | Mantilla | 1998-06-08 | 1998-06-08 | 0 | 1 |
| Wojtek | Fibak | 1977-07-25 | 1977-07-25 | 0 | 1 |
| Thierry | Tulasne | 1986-08-04 | 1986-08-04 | 0 | 1 |
| Lucas | Pouille | 2018-03-19 | 2018-03-19 | 0 | 1 |
| John | Alexander | 1975-12-15 | 1975-12-15 | 0 | 1 |
| Nicolas | Massu | 2004-09-13 | 2004-09-20 | 7 | 2 |
| Arnaud | Clement | 2001-04-02 | 2001-04-09 | 7 | 2 |
| Ernests | Gulbis | 2014-06-09 | 2014-06-23 | 14 | 3 |
+------------+-----------+---------------+------------------+------+-------+
```
Plotting histograms {#plotting-histograms}
JupySQL also has limited charting functionality.
We can create box plots or histograms.
We're going to create a histogram, but first let's write (and save) a query that computes the rankings within the top 100 that each player has achieved.
We'll be able to use this to create a histogram that counts how many players achieved each ranking:
```python
%%sql --save players_per_rank --no-execute
select distinct player, rank
FROM atp.rankings
WHERE rank <= 100
```

We can then create a histogram by running the following:
```python
from sql.ggplot import ggplot, geom_histogram, aes
plot = (
ggplot(
table="players_per_rank",
with_="players_per_rank",
mapping=aes(x="rank", fill="#69f0ae", color="#fff"),
) + geom_histogram(bins=100)
)
```
---
title: 'How to query a remote ClickHouse server'
sidebar_label: 'Querying remote ClickHouse'
slug: /chdb/guides/query-remote-clickhouse
description: 'In this guide, we will learn how to query a remote ClickHouse server from chDB.'
keywords: ['chdb', 'clickhouse']
doc_type: 'guide'
---
In this guide, we're going to learn how to query a remote ClickHouse server from chDB.
Setup {#setup}
Let's first create a virtual environment:
```bash
python -m venv .venv
source .venv/bin/activate
```
And now we'll install chDB.
Make sure you have version 2.0.2 or higher:
```bash
pip install "chdb>=2.0.2"
```
And now we're going to install pandas and ipython:

```bash
pip install pandas ipython
```
We're going to use `ipython` to run the commands in the rest of the guide, which you can launch by running:

```bash
ipython
```
You can also use the code in a Python script or in your favorite notebook.
An intro to ClickPy {#an-intro-to-clickpy}
The remote ClickHouse server that we're going to query is ClickPy.
ClickPy keeps track of all the downloads of PyPI packages and lets you explore the stats of packages via a UI.
The underlying database is available to query using the `play` user.
You can learn more about ClickPy in its GitHub repository.
Querying the ClickPy ClickHouse service {#querying-the-clickpy-clickhouse-service}
Let's import chDB:

```python
import chdb
```
We're going to query ClickPy using the `remoteSecure` function.
This function takes in a host name, table name, and username at a minimum.
We can write the following query to return the number of downloads per day of the `openai` package as a Pandas DataFrame:
```python
query = """
SELECT
toStartOfDay(date)::Date32 AS x,
sum(count) AS y
FROM remoteSecure(
'clickpy-clickhouse.clickhouse.com',
'pypi.pypi_downloads_per_day',
'play'
)
WHERE project = 'openai'
GROUP BY x
ORDER BY x ASC
"""
openai_df = chdb.query(query, "DataFrame")
openai_df.sort_values(by=["x"], ascending=False).head(n=10)
```
```text
x y
2392 2024-10-02 1793502
2391 2024-10-01 1924901
2390 2024-09-30 1749045
2389 2024-09-29 1177131
2388 2024-09-28 1157323
2387 2024-09-27 1688094
2386 2024-09-26 1862712
2385 2024-09-25 2032923
2384 2024-09-24 1901965
2383 2024-09-23 1777554
```

Now let's do the same to return the downloads for `scikit-learn`:

```python
query = """
SELECT
toStartOfDay(date)::Date32 AS x,
sum(count) AS y
FROM remoteSecure(
'clickpy-clickhouse.clickhouse.com',
'pypi.pypi_downloads_per_day',
'play'
)
WHERE project = 'scikit-learn'
GROUP BY x
ORDER BY x ASC
"""
sklearn_df = chdb.query(query, "DataFrame")
sklearn_df.sort_values(by=["x"], ascending=False).head(n=10)
```
```text
x y
2392 2024-10-02 1793502
2391 2024-10-01 1924901
2390 2024-09-30 1749045
2389 2024-09-29 1177131
2388 2024-09-28 1157323
2387 2024-09-27 1688094
2386 2024-09-26 1862712
2385 2024-09-25 2032923
2384 2024-09-24 1901965
2383 2024-09-23 1777554
```

Merging Pandas DataFrames {#merging-pandas-dataframes}
We now have two DataFrames, which we can merge together based on date (which is the `x` column) like this:

```python
df = openai_df.merge(
sklearn_df,
on="x",
suffixes=("_openai", "_sklearn")
)
df.head(n=5)
```

```text
x y_openai y_sklearn
0 2018-02-26 83 33971
1 2018-02-27 31 25211
2 2018-02-28 8 26023
3 2018-03-01 8 20912
4 2018-03-02         5      23842
```

We can then compute the ratio of OpenAI downloads to `scikit-learn` downloads like this:

```python
df['ratio'] = df['y_openai'] / df['y_sklearn']
df.head(n=5)
```

```text
x y_openai y_sklearn ratio
0 2018-02-26 83 33971 0.002443
1 2018-02-27 31 25211 0.001230
2 2018-02-28 8 26023 0.000307
3 2018-03-01 8 20912 0.000383
4 2018-03-02         5      23842  0.000210
```
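The daily ratio is fairly noisy, so it can help to smooth it with a rolling mean before plotting. A sketch with made-up numbers standing in for the real download counts:

```python
import pandas as pd

# Made-up miniature version of the merged DataFrame above
df = pd.DataFrame({
    "x": pd.date_range("2024-01-01", periods=5),
    "y_openai": [100, 200, 300, 400, 500],
    "y_sklearn": [1000, 1000, 1000, 1000, 1000],
})
df["ratio"] = df["y_openai"] / df["y_sklearn"]

# 3-day rolling mean of the ratio
df["ratio_smoothed"] = df["ratio"].rolling(window=3, min_periods=1).mean()
print(df[["x", "ratio", "ratio_smoothed"]])
```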
Querying Pandas DataFrames {#querying-pandas-dataframes}
Next, let's say we want to find the dates with the best and worst ratios.
We can go back to chDB and compute those values:
```python
chdb.query("""
SELECT max(ratio) AS bestRatio,
argMax(x, ratio) AS bestDate,
min(ratio) AS worstRatio,
argMin(x, ratio) AS worstDate
FROM Python(df)
""", "DataFrame")
text
bestRatio bestDate worstRatio worstDate
0   0.693855 2024-09-19    0.000003 2020-02-09
```

If you want to learn more about querying Pandas DataFrames, see the Pandas DataFrames developer guide.
---
title: 'How to query Apache Arrow with chDB'
sidebar_label: 'Querying Apache Arrow'
slug: /chdb/guides/apache-arrow
description: 'In this guide, we will learn how to query Apache Arrow tables with chDB'
keywords: ['chdb', 'Apache Arrow']
doc_type: 'guide'
---
Apache Arrow is a standardized column-oriented memory format that's gained popularity in the data community.
In this guide, we will learn how to query Apache Arrow using the `Python` table function.
Setup {#setup}
Let's first create a virtual environment:
```bash
python -m venv .venv
source .venv/bin/activate
```
And now we'll install chDB.
Make sure you have version 2.0.2 or higher:
```bash
pip install "chdb>=2.0.2"
```
And now we're going to install PyArrow, pandas, and ipython:

```bash
pip install pyarrow pandas ipython
```
We're going to use `ipython` to run the commands in the rest of the guide, which you can launch by running:

```bash
ipython
```
You can also use the code in a Python script or in your favorite notebook.
Creating an Apache Arrow table from a file {#creating-an-apache-arrow-table-from-a-file}
Let's first download one of the Parquet files for the Ookla dataset, using the AWS CLI tool:

```bash
aws s3 cp \
  --no-sign-request \
  s3://ookla-open-data/parquet/performance/type=mobile/year=2023/quarter=2/2023-04-01_performance_mobile_tiles.parquet .
```
:::note
If you want to download more files, use `aws s3 ls` to get a list of all the files and then update the above command.
:::
Next, we'll import the Parquet module from the `pyarrow` package:

```python
import pyarrow.parquet as pq
```
And then we can read the Parquet file into an Apache Arrow table:
```python
arrow_table = pq.read_table("./2023-04-01_performance_mobile_tiles.parquet")
```
The schema is shown below:
```python
arrow_table.schema
```

```text
quadkey: string
tile: string
tile_x: double
tile_y: double
avg_d_kbps: int64
avg_u_kbps: int64
avg_lat_ms: int64
avg_lat_down_ms: int32
avg_lat_up_ms: int32
tests: int64
devices: int64
```

And we can get the row and column count by calling the `shape` attribute:

```python
arrow_table.shape
```
```text
(3864546, 11)
```
Querying Apache Arrow {#querying-apache-arrow}
Now let's query the Arrow table from chDB.
First, let's import chDB:
```python
import chdb
```
And then we can describe the table:
```python
chdb.query("""
DESCRIBE Python(arrow_table)
SETTINGS describe_compact_output=1
""", "DataFrame")
text
name type
0 quadkey String
1 tile String
2 tile_x Float64
3 tile_y Float64
4 avg_d_kbps Int64
5 avg_u_kbps Int64
6 avg_lat_ms Int64
7 avg_lat_down_ms Int32
8 avg_lat_up_ms Int32
9 tests Int64
10         devices    Int64
```

We can also count the number of rows:

```python
chdb.query("SELECT count() FROM Python(arrow_table)", "DataFrame")
```
```text
   count()
0  3864546
```
Now, let's do something a bit more interesting.
The following query excludes the `quadkey` and `tile.*` columns and then computes the average and max values for all remaining columns:

```python
chdb.query("""
WITH numericColumns AS (
SELECT * EXCEPT ('tile.*') EXCEPT(quadkey)
FROM Python(arrow_table)
)
SELECT * APPLY(max), * APPLY(avg) APPLY(x -> round(x, 2))
FROM numericColumns
""", "Vertical")
text
Row 1:
──────
max(avg_d_kbps): 4155282
max(avg_u_kbps): 1036628
max(avg_lat_ms): 2911
max(avg_lat_down_ms): 2146959360
max(avg_lat_up_ms): 2146959360
max(tests): 111266
max(devices): 1226
round(avg(avg_d_kbps), 2): 84393.52
round(avg(avg_u_kbps), 2): 15540.4
round(avg(avg_lat_ms), 2): 41.25
round(avg(avg_lat_down_ms), 2): 554355225.76
round(avg(avg_lat_up_ms), 2): 552843178.3
round(avg(tests), 2): 6.31
round(avg(devices), 2):         2.88
```
---
title: 'chDB Python API Reference'
sidebar_label: 'Python API'
slug: /chdb/api/python
description: 'Complete Python API reference for chDB'
keywords: ['chdb', 'embedded', 'clickhouse-lite', 'python', 'api', 'reference']
doc_type: 'reference'
---
Python API Reference
Core Query Functions {#core-query-functions}
chdb.query {#chdb-query}
Execute SQL query using chDB engine.
This is the main query function that executes SQL statements using the embedded
ClickHouse engine. Supports various output formats and can work with in-memory
or file-based databases.
Syntax

```python
chdb.query(sql, output_format='CSV', path='', udf_path='')
```
Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `sql` | str | *required* | SQL query string to execute |
| `output_format` | str | `"CSV"` | Output format for results. Supported formats:<br>• `"CSV"` - Comma-separated values<br>• `"JSON"` - JSON format<br>• `"Arrow"` - Apache Arrow format<br>• `"Parquet"` - Parquet format<br>• `"DataFrame"` - Pandas DataFrame<br>• `"ArrowTable"` - PyArrow Table<br>• `"Debug"` - Enable verbose logging |
| `path` | str | `""` | Database file path. Defaults to in-memory database. Can be a file path or `":memory:"` for in-memory database |
| `udf_path` | str | `""` | Path to User-Defined Functions directory |
Returns
Returns the query result in the specified format:
| Return Type | Condition |
|-------------|-----------|
| `str` | For text formats like CSV, JSON |
| `pd.DataFrame` | When `output_format` is `"DataFrame"` or `"dataframe"` |
| `pa.Table` | When `output_format` is `"ArrowTable"` or `"arrowtable"` |
| chdb result object | For other formats |
Raises
| Exception | Condition |
|-----------|-----------|
| `ChdbError` | If the SQL query execution fails |
| `ImportError` | If required dependencies are missing for DataFrame/Arrow formats |
Examples
```pycon
>>> # Basic CSV query
>>> result = chdb.query("SELECT 1, 'hello'")
>>> print(result)
"1,hello"
```

```pycon
>>> # Query with DataFrame output
>>> df = chdb.query("SELECT 1 as id, 'hello' as msg", "DataFrame")
>>> print(df)
   id    msg
0   1  hello
```

```pycon
>>> # Query with file-based database
>>> result = chdb.query("CREATE TABLE test (id INT) ENGINE = Memory", path="mydb.chdb")
```

```pycon
>>> # Query with UDF
>>> result = chdb.query("SELECT my_udf('test')", udf_path="/path/to/udfs")
```
### chdb.sql {#chdb_sql}
Execute SQL query using chDB engine.
This is the main query function that executes SQL statements using the embedded
ClickHouse engine. Supports various output formats and can work with in-memory
or file-based databases.
Syntax

```python
chdb.sql(sql, output_format='CSV', path='', udf_path='')
```
Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `sql` | str | *required* | SQL query string to execute |
| `output_format` | str | `"CSV"` | Output format for results. Supported formats:<br/>• `"CSV"` - Comma-separated values<br/>• `"JSON"` - JSON format<br/>• `"Arrow"` - Apache Arrow format<br/>• `"Parquet"` - Parquet format<br/>• `"DataFrame"` - Pandas DataFrame<br/>• `"ArrowTable"` - PyArrow Table<br/>• `"Debug"` - Enable verbose logging |
| `path` | str | `""` | Database file path. Defaults to in-memory database. Can be a file path or `":memory:"` for an in-memory database |
| `udf_path` | str | `""` | Path to User-Defined Functions directory |
Returns
Returns the query result in the specified format:
| Return Type | Condition |
|-------------|-----------|
| `str` | For text formats like CSV, JSON |
| `pd.DataFrame` | When `output_format` is `"DataFrame"` or `"dataframe"` |
| `pa.Table` | When `output_format` is `"ArrowTable"` or `"arrowtable"` |
| chdb result object | For other formats |
Raises
| Exception | Condition |
|-----------|-----------|
| `ChdbError` | If the SQL query execution fails |
| `ImportError` | If required dependencies are missing for DataFrame/Arrow formats |
Examples
```pycon
>>> # Basic CSV query
>>> result = chdb.query("SELECT 1, 'hello'")
>>> print(result)
"1,hello"
```

```pycon
>>> # Query with DataFrame output
>>> df = chdb.query("SELECT 1 as id, 'hello' as msg", "DataFrame")
>>> print(df)
   id    msg
0   1  hello
```

```pycon
>>> # Query with file-based database
>>> result = chdb.query("CREATE TABLE test (id INT) ENGINE = Memory", path="mydb.chdb")
```

```pycon
>>> # Query with UDF
>>> result = chdb.query("SELECT my_udf('test')", udf_path="/path/to/udfs")
```
### chdb.to_arrowTable {#chdb-state-sqlitelike-to_arrowtable}
Convert query result to PyArrow Table.
Converts a chDB query result to a PyArrow Table for efficient columnar data processing.
Returns an empty table if the result is empty.
Syntax

```python
chdb.to_arrowTable(res)
```
Parameters

| Parameter | Description |
|-----------|-------------|
| `res` | chDB query result object containing binary Arrow data |
Returns

| Return Type | Description |
|-------------|-------------|
| `pa.Table` | PyArrow Table containing the query results |
Raises

| Exception | Condition |
|-----------|-----------|
| `ImportError` | If pyarrow or pandas are not installed |
Example
```pycon
>>> result = chdb.query("SELECT 1 as id, 'hello' as msg", "Arrow")
>>> table = chdb.to_arrowTable(result)
>>> print(table.to_pandas())
   id    msg
0   1  hello
```
### chdb.to_df {#chdb_to_df}
Convert query result to pandas DataFrame.
Converts a chDB query result to a pandas DataFrame by first converting to
PyArrow Table and then to pandas using multi-threading for better performance.
Syntax

```python
chdb.to_df(r)
```
Parameters

| Parameter | Description |
|-----------|-------------|
| `r` | chDB query result object containing binary Arrow data |
Returns

| Return Type | Description |
|-------------|-------------|
| `pd.DataFrame` | pandas DataFrame containing the query results |
Raises

| Exception | Condition |
|-----------|-----------|
| `ImportError` | If pyarrow or pandas are not installed |
Example
```pycon
>>> result = chdb.query("SELECT 1 as id, 'hello' as msg", "Arrow")
>>> df = chdb.to_df(result)
>>> print(df)
   id    msg
0   1  hello
```
## Connection and Session Management {#connection-session-management}
The following Session Functions are available:
### chdb.connect {#chdb-connect}

Create a connection to the chDB background server.
This function establishes a `Connection` to the chDB (ClickHouse) database engine.
Only one open connection is allowed per process.
Multiple calls with the same connection string will return the same connection object.

```python
chdb.connect(connection_string: str = ':memory:') → Connection
```
Parameters:

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `connection_string` | str | `":memory:"` | Database connection string. See formats below. |
Basic formats

| Format | Description |
|--------|-------------|
| `":memory:"` | In-memory database (default) |
| `"test.db"` | Relative path database file |
| `"file:test.db"` | Same as relative path |
| `"/path/to/test.db"` | Absolute path database file |
| `"file:/path/to/test.db"` | Same as absolute path |
With query parameters

| Format | Description |
|--------|-------------|
| `"file:test.db?param1=value1&param2=value2"` | Relative path with params |
| `"file::memory:?verbose&log-level=test"` | In-memory with params |
| `"///path/to/test.db?param1=value1&param2=value2"` | Absolute path with params |
Query parameter handling
Query parameters are passed to ClickHouse engine as startup arguments.
Special parameter handling:
| Special Parameter | Becomes | Description |
|-------------------|---------|-------------|
| `mode=ro` | `--readonly=1` | Read-only mode |
| `verbose` | (flag) | Enables verbose logging |
| `log-level=test` | (setting) | Sets logging level |
For a complete parameter list, see `clickhouse local --help --verbose`.
Returns
| Return Type | Description |
|-------------|-------------|
| `Connection` | Database connection object that supports:<br/>• Creating cursors with `Connection.cursor()`<br/>• Direct queries with `Connection.query()`<br/>• Streaming queries with `Connection.send_query()`<br/>• Context manager protocol for automatic cleanup |
Raises

| Exception | Condition |
|-----------|-----------|
| `RuntimeError` | If connection to database fails |
:::warning
Only one connection per process is supported.
Creating a new connection will close any existing connection.
:::
Examples
```pycon
>>> # In-memory database
>>> conn = connect()
>>> conn = connect(":memory:")
>>> # File-based database
>>> conn = connect("my_data.db")
>>> conn = connect("/path/to/data.db")
>>> # With parameters
>>> conn = connect("data.db?mode=ro")  # Read-only mode
>>> conn = connect(":memory:?verbose&log-level=debug")  # Debug logging
>>> # Using context manager for automatic cleanup
>>> with connect("data.db") as conn:
...     result = conn.query("SELECT 1")
...     print(result)
>>> # Connection automatically closed
```
See also
- `Connection` - Database connection class
- `Cursor` - Database cursor for DB-API 2.0 operations
## Exception Handling {#chdb-exceptions}
### class chdb.ChdbError {#chdb_chdbError}

Bases: `Exception`
Base exception class for chDB-related errors.
This exception is raised when chDB query execution fails or encounters
an error. It inherits from the standard Python Exception class and
provides error information from the underlying ClickHouse engine.
### class chdb.session.Session {#chdb_session_session}

Bases: `object`
Session will keep the state of query.
If path is None, it will create a temporary directory and use it as the database path
and the temporary directory will be removed when the session is closed.
You can also pass in a path to create a database at that path where will keep your data.
You can also use a connection string to pass in the path and other parameters.
```python
class chdb.session.Session(path=None)
```
Examples
| Connection String | Description |
|-------------------|-------------|
| `":memory:"` | In-memory database |
| `"test.db"` | Relative path |
| `"file:test.db"` | Same as above |
| `"/path/to/test.db"` | Absolute path |
| `"file:/path/to/test.db"` | Same as above |
| `"file:test.db?param1=value1&param2=value2"` | Relative path with query params |
| `"file::memory:?verbose&log-level=test"` | In-memory database with query params |
| `"///path/to/test.db?param1=value1&param2=value2"` | Absolute path with query params |
:::note Connection string args handling
For connection strings containing query params like `file:test.db?param1=value1&param2=value2`,
`param1=value1` will be passed to the ClickHouse engine as startup args.
For more details, see `clickhouse local --help --verbose`.
Some special args handling:
- `mode=ro` would be `--readonly=1` for clickhouse (read-only mode)
:::
:::warning Important
- There can be only one session at a time. If you want to create a new session, you need to close the existing one.
- Creating a new session will close the existing one.
:::
### cleanup {#cleanup}
Cleanup session resources with exception handling.
This method attempts to close the session while suppressing any exceptions
that might occur during the cleanup process. It’s particularly useful in
error handling scenarios or when you need to ensure cleanup happens regardless
of the session state.
Syntax

```python
cleanup()
```
:::note
This method will never raise an exception, making it safe to call in
finally blocks or destructors.
:::
Examples
```pycon
>>> session = Session("test.db")
>>> try:
...     session.query("INVALID SQL")
... finally:
...     session.cleanup()  # Safe cleanup regardless of errors
```
See also
- `close()` - For explicit session closing with error propagation
### close {#close}
Close the session and cleanup resources.
This method closes the underlying connection and resets the global session state.
After calling this method, the session becomes invalid and cannot be used for
further queries.
Syntax

```python
close()
```
:::note
This method is automatically called when the session is used as a context manager
or when the session object is destroyed.
:::
:::warning Important
Any attempt to use the session after calling `close()` will result in an error.
:::
Examples
```pycon
>>> session = Session("test.db")
>>> session.query("SELECT 1")
>>> session.close()  # Explicitly close the session
```
### query {#chdb-session-session-query}
Execute a SQL query and return the results.
This method executes a SQL query against the session’s database and returns
the results in the specified format. The method supports various output formats
and maintains session state between queries.
Syntax

```python
query(sql, fmt='CSV', udf_path='')
```
Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `sql` | str | *required* | SQL query string to execute |
| `fmt` | str | `"CSV"` | Output format for results. Available formats:<br/>• `"CSV"` - Comma-separated values<br/>• `"JSON"` - JSON format<br/>• `"TabSeparated"` - Tab-separated values<br/>• `"Pretty"` - Pretty-printed table format<br/>• `"JSONCompact"` - Compact JSON format<br/>• `"Arrow"` - Apache Arrow format<br/>• `"Parquet"` - Parquet format |
| `udf_path` | str | `""` | Path to user-defined functions. If not specified, uses the UDF path from session initialization |
Returns
Returns query results in the specified format.
The exact return type depends on the format parameter:
- String formats (CSV, JSON, etc.) return str
- Binary formats (Arrow, Parquet) return bytes
Raises

| Exception | Condition |
|-----------|-----------|
| `RuntimeError` | If the session is closed or invalid |
| `ValueError` | If the SQL query is malformed |
:::note
The "Debug" format is not supported and will be automatically converted to "CSV" with a warning.
For debugging, use connection string parameters instead.
:::
:::warning Warning
This method executes the query synchronously and loads all results into memory.
For large result sets, consider using `send_query()` for streaming results.
:::
Examples
```pycon
>>> session = Session("test.db")
>>> # Basic query with default CSV format
>>> result = session.query("SELECT 1 as number")
>>> print(result)
number
1
```

```pycon
>>> # Query with JSON format
>>> result = session.query("SELECT 1 as number", fmt="JSON")
>>> print(result)
{"number": "1"}
```

```pycon
>>> # Complex query with table creation
>>> session.query("CREATE TABLE test (id INT, name String) ENGINE = Memory")
>>> session.query("INSERT INTO test VALUES (1, 'Alice'), (2, 'Bob')")
>>> result = session.query("SELECT * FROM test ORDER BY id")
>>> print(result)
id,name
1,Alice
2,Bob
```
See also
- `send_query()` - For streaming query execution
- `sql` - Alias for this method
### send_query {#chdb-session-session-send_query}
Execute a SQL query and return a streaming result iterator.
This method executes a SQL query against the session’s database and returns
a streaming result object that allows you to iterate over the results without
loading everything into memory at once. This is particularly useful for large
result sets.
Syntax

```python
send_query(sql, fmt='CSV') → StreamingResult
```
Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `sql` | str | *required* | SQL query string to execute |
| `fmt` | str | `"CSV"` | Output format for results. Available formats:<br/>• `"CSV"` - Comma-separated values<br/>• `"JSON"` - JSON format<br/>• `"TabSeparated"` - Tab-separated values<br/>• `"JSONCompact"` - Compact JSON format<br/>• `"Arrow"` - Apache Arrow format<br/>• `"Parquet"` - Parquet format |
Returns
| Return Type | Description |
|-------------|-------------|
| `StreamingResult` | A streaming result iterator that yields query results incrementally. The iterator can be used in for loops or converted to other data structures |
Raises

| Exception | Condition |
|-----------|-----------|
| `RuntimeError` | If the session is closed or invalid |
| `ValueError` | If the SQL query is malformed |
:::note
The "Debug" format is not supported and will be automatically converted to "CSV" with a warning.
For debugging, use connection string parameters instead.
:::
:::warning
The returned StreamingResult object should be consumed promptly or stored appropriately, as it maintains a connection to the database.
:::
Examples
```pycon
>>> session = Session("test.db")
>>> session.query("CREATE TABLE big_table (id INT, data String) ENGINE = MergeTree() ORDER BY id")
>>> # Insert large dataset
>>> for i in range(1000):
...     session.query(f"INSERT INTO big_table VALUES ({i}, 'data_{i}')")
>>> # Stream results to avoid memory issues
>>> streaming_result = session.send_query("SELECT * FROM big_table ORDER BY id")
>>> for chunk in streaming_result:
...     print(f"Processing chunk: {len(chunk)} bytes")
...     # Process chunk without loading entire result set
```

```pycon
>>> # Using with context manager
>>> with session.send_query("SELECT COUNT(*) FROM big_table") as stream:
...     for result in stream:
...         print(f"Count result: {result}")
```
See also
- `query()` - For non-streaming query execution
- `chdb.state.sqlitelike.StreamingResult` - Streaming result iterator
### sql {#chdb-session-session-sql}
Execute a SQL query and return the results.
This method executes a SQL query against the session’s database and returns
the results in the specified format. The method supports various output formats
and maintains session state between queries.
Syntax

```python
sql(sql, fmt='CSV', udf_path='')
```
Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `sql` | str | *required* | SQL query string to execute |
| `fmt` | str | `"CSV"` | Output format for results. Available formats:<br/>• `"CSV"` - Comma-separated values<br/>• `"JSON"` - JSON format<br/>• `"TabSeparated"` - Tab-separated values<br/>• `"Pretty"` - Pretty-printed table format<br/>• `"JSONCompact"` - Compact JSON format<br/>• `"Arrow"` - Apache Arrow format<br/>• `"Parquet"` - Parquet format |
| `udf_path` | str | `""` | Path to user-defined functions. If not specified, uses the UDF path from session initialization |
Returns
Returns query results in the specified format.
The exact return type depends on the format parameter:
- String formats (CSV, JSON, etc.) return str
- Binary formats (Arrow, Parquet) return bytes
Raises
| Exception | Condition |
|-----------|-----------|
| `RuntimeError` | If the session is closed or invalid |
| `ValueError` | If the SQL query is malformed |
:::note
The "Debug" format is not supported and will be automatically converted to "CSV" with a warning.
For debugging, use connection string parameters instead.
:::
:::warning Warning
This method executes the query synchronously and loads all results into memory.
For large result sets, consider using `send_query()` for streaming results.
:::
Examples
```pycon
>>> session = Session("test.db")
>>> # Basic query with default CSV format
>>> result = session.query("SELECT 1 as number")
>>> print(result)
number
1
```

```pycon
>>> # Query with JSON format
>>> result = session.query("SELECT 1 as number", fmt="JSON")
>>> print(result)
{"number": "1"}
```

```pycon
>>> # Complex query with table creation
>>> session.query("CREATE TABLE test (id INT, name String) ENGINE = MergeTree() ORDER BY id")
>>> session.query("INSERT INTO test VALUES (1, 'Alice'), (2, 'Bob')")
>>> result = session.query("SELECT * FROM test ORDER BY id")
>>> print(result)
id,name
1,Alice
2,Bob
```
See also
- `send_query()` - For streaming query execution
- `sql` - Alias for this method
## State Management {#chdb-state-management}
### chdb.state.connect {#chdb_state_connect}

Create a `Connection` to the chDB background server.
This function establishes a connection to the chDB (ClickHouse) database engine.
Only one open connection is allowed per process. Multiple calls with the same
connection string will return the same connection object.
Syntax

```python
chdb.state.connect(connection_string: str = ':memory:') → Connection
```
Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `connection_string` | str | `":memory:"` | Database connection string. See formats below. |
Basic formats
Supported connection string formats:

| Format | Description |
|--------|-------------|
| `":memory:"` | In-memory database (default) |
| `"test.db"` | Relative path database file |
| `"file:test.db"` | Same as relative path |
| `"/path/to/test.db"` | Absolute path database file |
| `"file:/path/to/test.db"` | Same as absolute path |
With query parameters

| Format | Description |
|--------|-------------|
| `"file:test.db?param1=value1&param2=value2"` | Relative path with params |
| `"file::memory:?verbose&log-level=test"` | In-memory with params |
| `"///path/to/test.db?param1=value1&param2=value2"` | Absolute path with params |
Query parameter handling
Query parameters are passed to ClickHouse engine as startup arguments.
Special parameter handling:
| Special Parameter | Becomes | Description |
|-------------------|---------|-------------|
| `mode=ro` | `--readonly=1` | Read-only mode |
| `verbose` | (flag) | Enables verbose logging |
| `log-level=test` | (setting) | Sets logging level |
For a complete parameter list, see `clickhouse local --help --verbose`.
Returns
| Return Type | Description |
|-------------|-------------|
| `Connection` | Database connection object that supports:<br/>• Creating cursors with `Connection.cursor()`<br/>• Direct queries with `Connection.query()`<br/>• Streaming queries with `Connection.send_query()`<br/>• Context manager protocol for automatic cleanup |
Raises

| Exception | Condition |
|-----------|-----------|
| `RuntimeError` | If connection to database fails |
:::warning Warning
Only one connection per process is supported.
Creating a new connection will close any existing connection.
:::
Examples
```pycon
>>> # In-memory database
>>> conn = connect()
>>> conn = connect(":memory:")
>>> # File-based database
>>> conn = connect("my_data.db")
>>> conn = connect("/path/to/data.db")
>>> # With parameters
>>> conn = connect("data.db?mode=ro")  # Read-only mode
>>> conn = connect(":memory:?verbose&log-level=debug")  # Debug logging
>>> # Using context manager for automatic cleanup
>>> with connect("data.db") as conn:
...     result = conn.query("SELECT 1")
...     print(result)
>>> # Connection automatically closed
```
See also
- `Connection` - Database connection class
- `Cursor` - Database cursor for DB-API 2.0 operations
### class chdb.state.sqlitelike.Connection {#chdb-state-sqlitelike-connection}

Bases: `object`

Syntax

```python
class chdb.state.sqlitelike.Connection(connection_string: str)
```
### close {#chdb-session-session-close}
Close the connection and cleanup resources.
This method closes the database connection and cleans up any associated
resources including active cursors. After calling this method, the
connection becomes invalid and cannot be used for further operations.
Syntax

```python
close() → None
```
:::note
This method is idempotent - calling it multiple times is safe.
:::
:::warning Warning
Any ongoing streaming queries will be cancelled when the connection
is closed. Ensure all important data is processed before closing.
:::
Examples
```pycon
>>> conn = connect("test.db")
>>> # Use connection for queries
>>> conn.query("CREATE TABLE test (id INT) ENGINE = Memory")
>>> # Close when done
>>> conn.close()
```
```pycon
>>> # Using with context manager (automatic cleanup)
>>> with connect("test.db") as conn:
...     conn.query("SELECT 1")
...     # Connection automatically closed
```
### cursor {#chdb-state-sqlitelike-connection-cursor}
Create a `Cursor` object for executing queries.
This method creates a database cursor that provides the standard
DB-API 2.0 interface for executing queries and fetching results.
The cursor allows for fine-grained control over query execution
and result retrieval.
Syntax

```python
cursor() → Cursor
```
Returns

| Return Type | Description |
|-------------|-------------|
| `Cursor` | A cursor object for database operations |
:::note
Creating a new cursor will replace any existing cursor associated
with this connection. Only one cursor per connection is supported.
:::
Examples
```pycon
>>> conn = connect(":memory:")
>>> cursor = conn.cursor()
>>> cursor.execute("CREATE TABLE test (id INT, name String) ENGINE = Memory")
>>> cursor.execute("INSERT INTO test VALUES (1, 'Alice')")
>>> cursor.execute("SELECT * FROM test")
>>> rows = cursor.fetchall()
>>> print(rows)
((1, 'Alice'),)
```
See also
- `Cursor` - Database cursor implementation
### query {#chdb-state-sqlitelike-connection-query}
Execute a SQL query and return the complete results.
This method executes a SQL query synchronously and returns the complete
result set. It supports various output formats and automatically applies
format-specific post-processing.
Syntax

```python
query(query: str, format: str = 'CSV') → Any
```

Parameters:
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `query` | str | *required* | SQL query string to execute |
| `format` | str | `"CSV"` | Output format for results. Supported formats:<br/>• `"CSV"` - Comma-separated values (string)<br/>• `"JSON"` - JSON format (string)<br/>• `"Arrow"` - Apache Arrow format (bytes)<br/>• `"Dataframe"` - Pandas DataFrame (requires pandas)<br/>• `"Arrowtable"` - PyArrow Table (requires pyarrow) |
Returns
| Return Type | Description |
|--------------------|--------------------------------|
| `str` | For string formats (CSV, JSON) |
| `bytes` | For Arrow format |
| `pandas.DataFrame` | For dataframe format |
| `pyarrow.Table` | For arrowtable format |
Raises
| Exception | Condition |
|----------------|---------------------------------------------------|
| `RuntimeError` | If query execution fails |
| `ImportError` | If required packages for format are not installed |
:::warning Warning
This method loads the entire result set into memory. For large
results, consider using `send_query()` for streaming.
:::
Examples
```pycon
conn = connect(":memory:")
# Basic CSV query
result = conn.query("SELECT 1 as num, 'hello' as text")
print(result)
num,text
1,hello
```
```pycon
# DataFrame format
df = conn.query("SELECT number FROM numbers(5)", "dataframe")
print(df)
number
0 0
1 1
2 2
3 3
4 4
```
See also
- `send_query()` - For streaming query execution
send_query {#chdb-state-sqlitelike-connection-send_query}
Execute a SQL query and return a streaming result iterator.
This method executes a SQL query and returns a StreamingResult object
that allows you to iterate over the results without loading everything
into memory at once. This is ideal for processing large result sets.
Syntax
```python
send_query(query: str, format: str = 'CSV') → StreamingResult
```
Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `query` | str | *required* | SQL query string to execute |
| `format` | str | `"CSV"` | Output format for results. Supported formats:<br>• `"CSV"` - Comma-separated values<br>• `"JSON"` - JSON format<br>• `"Arrow"` - Apache Arrow format (enables record_batch() method)<br>• `"dataframe"` - Pandas DataFrame chunks<br>• `"arrowtable"` - PyArrow Table chunks |
Returns
| Return Type | Description |
|-------------------|-------------|
| `StreamingResult` | A streaming iterator for query results that supports:<br>• Iterator protocol (for loops)<br>• Context manager protocol (with statements)<br>• Manual fetching with fetch() method<br>• PyArrow RecordBatch streaming (Arrow format only) |
Raises
| Exception | Condition |
|----------------|---------------------------------------------------|
| `RuntimeError` | If query execution fails |
| `ImportError` | If required packages for format are not installed |
:::note
Only the "Arrow" format supports the `record_batch()` method on the
returned StreamingResult.
:::
Examples
```pycon
conn = connect(":memory:")
# Basic streaming
stream = conn.send_query("SELECT number FROM numbers(1000)")
for chunk in stream:
... print(f"Processing chunk: {len(chunk)} bytes")
```
```pycon
# Using context manager for cleanup
with conn.send_query("SELECT * FROM large_table") as stream:
... chunk = stream.fetch()
... while chunk:
... process_data(chunk)
... chunk = stream.fetch()
```
```pycon
# Arrow format with RecordBatch streaming
stream = conn.send_query("SELECT * FROM data", "Arrow")
reader = stream.record_batch(rows_per_batch=10000)
for batch in reader:
... print(f"Batch shape: {batch.num_rows} x {batch.num_columns}")
```
See also
- `query()` - For non-streaming query execution
- `StreamingResult` - Streaming result iterator
class chdb.state.sqlitelike.Cursor {#chdb-state-sqlitelike-cursor}
Bases: `object`
```python
class chdb.state.sqlitelike.Cursor(connection)
```
close {#cursor-close-none}
Close the cursor and cleanup resources.
This method closes the cursor and cleans up any associated resources.
After calling this method, the cursor becomes invalid and cannot be
used for further operations.
Syntax
```python
close() → None
```
:::note
This method is idempotent - calling it multiple times is safe.
The cursor is also automatically closed when the connection is closed.
:::
Examples
```pycon
cursor = conn.cursor()
cursor.execute("SELECT 1")
result = cursor.fetchone()
cursor.close() # Cleanup cursor resources
```
column_names {#chdb-state-sqlitelike-cursor-column_names}
Return a list of column names from the last executed query.
This method returns the column names from the most recently executed
SELECT query. The names are returned in the same order as they appear
in the result set.
Syntax
```python
column_names() → list
```
Returns
| Return Type | Description |
|-------------|-------------|
| `list` | List of column name strings, or empty list if no query has been executed or the query returned no columns |
Examples
```pycon
cursor = conn.cursor()
cursor.execute("SELECT id, name, email FROM users LIMIT 1")
print(cursor.column_names())
['id', 'name', 'email']
```
See also
- `column_types()` - Get column type information
- `description` - DB-API 2.0 column description
column_types {#chdb-state-sqlitelike-cursor-column_types}
Return a list of column types from the last executed query.
This method returns the ClickHouse column type names from the most
recently executed SELECT query. The types are returned in the same
order as they appear in the result set.
Syntax
```python
column_types() → list
```
Returns
| Return Type | Description |
|-------------|-------------|
| `list` | List of ClickHouse type name strings, or empty list if no query has been executed or the query returned no columns |
Examples
```pycon
cursor = conn.cursor()
cursor.execute("SELECT toInt32(1), toString('hello')")
print(cursor.column_types())
['Int32', 'String']
```
See also
- `column_names()` - Get column name information
- `description` - DB-API 2.0 column description
commit {#commit}
Commit any pending transaction.
This method commits any pending database transaction. In ClickHouse,
most operations are auto-committed, but this method is provided for
DB-API 2.0 compatibility.
:::note
ClickHouse typically auto-commits operations, so explicit commits
are usually not necessary. This method is provided for compatibility
with standard DB-API 2.0 workflow.
:::
Syntax
```python
commit() → None
```
Examples
```pycon
cursor = conn.cursor()
cursor.execute("INSERT INTO test VALUES (1, 'data')")
cursor.commit()
```
property description : list {#chdb-state-sqlitelike-cursor-description}
Return column description as per DB-API 2.0 specification.
This property returns a list of 7-item tuples describing each column
in the result set of the last executed SELECT query. Each tuple contains:
(name, type_code, display_size, internal_size, precision, scale, null_ok)
Currently, only name and type_code are provided, with other fields set to None.
Returns | {"source_file": "python.md"} | [
0.01648537442088127,
0.047289229929447174,
-0.0602538138628006,
0.04641889035701752,
-0.026687683537602425,
-0.04558838531374931,
0.04835846647620201,
0.03548196703195572,
0.014583117328584194,
0.0017456625355407596,
0.059229329228401184,
-0.00707605853676796,
-0.019057869911193848,
-0.125... |
f9c5a112-5511-4208-b132-4bf5ecf458a3 | Currently, only name and type_code are provided, with other fields set to None.
Returns
| Return Type | Description |
|-------------|-------------|
| `list` | List of 7-tuples describing each column, or empty list if no SELECT query has been executed |
:::note
This follows the DB-API 2.0 specification for cursor.description.
Only the first two elements (name and type_code) contain meaningful
data in this implementation.
:::
Examples
```pycon
cursor = conn.cursor()
cursor.execute("SELECT id, name FROM users LIMIT 1")
for desc in cursor.description:
... print(f"Column: {desc[0]}, Type: {desc[1]}")
Column: id, Type: Int32
Column: name, Type: String
```
See also
- `column_names()` - Get just column names
- `column_types()` - Get just column types
execute {#execute}
Execute a SQL query and prepare results for fetching.
This method executes a SQL query and prepares the results for retrieval
using the fetch methods. It handles the parsing of result data and
automatic type conversion for ClickHouse data types.
Syntax
```python
execute(query: str) → None
```
Parameters
| Parameter | Type | Description |
|-----------|------|-----------------------------|
| `query` | str | SQL query string to execute |
Raises
| Exception | Condition |
|-------------|--------------------------------------------------|
| `Exception` | If query execution fails or result parsing fails |
:::note
This method follows DB-API 2.0 specifications for `cursor.execute()`.
After execution, use `fetchone()`, `fetchmany()`, or `fetchall()` to
retrieve results.
:::
:::note
The method automatically converts ClickHouse data types to appropriate
Python types:
- Int/UInt types → int
- Float types → float
- String/FixedString → str
- DateTime → datetime.datetime
- Date → datetime.date
- Bool → bool
:::
Examples
```pycon
cursor = conn.cursor()
# Execute DDL
cursor.execute("CREATE TABLE test (id INT, name String) ENGINE = Memory")
# Execute DML
cursor.execute("INSERT INTO test VALUES (1, 'Alice')")
# Execute SELECT and fetch results
cursor.execute("SELECT * FROM test")
rows = cursor.fetchall()
print(rows)
((1, 'Alice'),)
```
See also
- `fetchone()` - Fetch single row
- `fetchmany()` - Fetch multiple rows
- `fetchall()` - Fetch all remaining rows
fetchall {#chdb-state-sqlitelike-cursor-fetchall}
Fetch all remaining rows from the query result.
This method retrieves all remaining rows from the current query result
set starting from the current cursor position. It returns a tuple of
row tuples with appropriate Python type conversion applied.
Syntax
```python
fetchall() → tuple
```
Returns
| Return Type | Description |
|-------------|-------------|
| `tuple` | Tuple containing all remaining row tuples from the result set. Returns empty tuple if no rows are available |
:::warning Warning
This method loads all remaining rows into memory at once. For large
result sets, consider using `fetchmany()` to process results in batches.
:::
Examples
```pycon
cursor = conn.cursor()
cursor.execute("SELECT id, name FROM users")
all_users = cursor.fetchall()
for user_id, user_name in all_users:
... print(f"User {user_id}: {user_name}")
```
See also
- `fetchone()` - Fetch single row
- `fetchall()` - Fetch all remaining rows
fetchmany {#chdb-state-sqlitelike-cursor-fetchmany}
Fetch multiple rows from the query result.
This method retrieves up to ‘size’ rows from the current query result
set. It returns a tuple of row tuples, with each row containing column
values with appropriate Python type conversion.
Syntax
```python
fetchmany(size: int = 1) → tuple
```
Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|---------------------------------|
| `size` | int | `1` | Maximum number of rows to fetch |
Returns
| Return Type | Description |
|-------------|-------------------------------------------------------------------------------------------------|
| `tuple` | Tuple containing up to 'size' row tuples. May contain fewer rows if the result set is exhausted |
:::note
This method follows DB-API 2.0 specifications. It will return fewer
than ‘size’ rows if the result set is exhausted.
:::
Examples
```pycon
cursor = conn.cursor()
cursor.execute("SELECT * FROM large_table")
# Process results in batches
while True:
... batch = cursor.fetchmany(100) # Fetch 100 rows at a time
... if not batch:
... break
... process_batch(batch)
```
See also
- `fetchone()` - Fetch single row
- `fetchall()` - Fetch all remaining rows
fetchone {#chdb-state-sqlitelike-cursor-fetchone}
Fetch the next row from the query result.
This method retrieves the next available row from the current query
result set. It returns a tuple containing the column values with
appropriate Python type conversion applied.
Syntax
```python
fetchone() → tuple | None
```
Returns
| Return Type | Description |
|-------------------|-----------------------------------------------------------------------------|
| `Optional[tuple]` | Next row as a tuple of column values, or None if no more rows are available |
:::note
This method follows DB-API 2.0 specifications. Column values are
automatically converted to appropriate Python types based on
ClickHouse column types.
:::
Examples
```pycon
cursor = conn.cursor()
cursor.execute("SELECT id, name FROM users")
row = cursor.fetchone()
while row is not None:
... user_id, user_name = row
... print(f"User {user_id}: {user_name}")
... row = cursor.fetchone()
```
See also
- `fetchmany()` - Fetch multiple rows
- `fetchall()` - Fetch all remaining rows
chdb.state.sqlitelike.to_arrowTable {#state-sqlitelike-to_arrowtable}
Convert query result to PyArrow Table.
This function converts chdb query results to a PyArrow Table format,
which provides efficient columnar data access and interoperability
with other data processing libraries.
Syntax
```python
chdb.state.sqlitelike.to_arrowTable(res)
```
Parameters
| Parameter | Type | Description |
|-----------|------|------------------------------------------------------------|
| `res` | - | Query result object from chdb containing Arrow format data |
Returns
| Return Type | Description |
|-----------------|--------------------------------------------|
| `pyarrow.Table` | PyArrow Table containing the query results |
Raises
| Exception | Condition |
|---------------|-------------------------------------------------|
| `ImportError` | If pyarrow or pandas packages are not installed |
:::note
This function requires both pyarrow and pandas to be installed.
Install them with: `pip install pyarrow pandas`
:::
:::warning Warning
Empty results return an empty PyArrow Table with no schema.
:::
Examples
```pycon
import chdb
result = chdb.query("SELECT 1 as num, 'hello' as text", "Arrow")
table = to_arrowTable(result)
print(table.schema)
num: int64
text: string
print(table.to_pandas())
num text
0 1 hello
```
chdb.state.sqlitelike.to_df {#state-sqlitelike-to_df}
Convert query result to Pandas DataFrame.
This function converts chdb query results to a Pandas DataFrame format
by first converting to PyArrow Table and then to DataFrame. This provides
convenient data analysis capabilities with Pandas API.
Syntax
```python
chdb.state.sqlitelike.to_df(r)
```
Parameters
| Parameter | Type | Description |
|-----------|------|------------------------------------------------------------|
| `r` | - | Query result object from chdb containing Arrow format data |
Returns
| Return Type | Description |
|--------------------|-------------------------------------------------------------------------------------|
| `pandas.DataFrame` | DataFrame containing the query results with appropriate column names and data types |
Raises
| Exception | Condition |
|---------------|-------------------------------------------------|
| `ImportError` | If pyarrow or pandas packages are not installed |
:::note
This function uses multi-threading for the Arrow to Pandas conversion
to improve performance on large datasets.
:::
See also
- `to_arrowTable()` - For PyArrow Table format conversion
Examples
```pycon
import chdb
result = chdb.query("SELECT 1 as num, 'hello' as text", "Arrow")
df = to_df(result)
print(df)
num text
0 1 hello
print(df.dtypes)
num int64
text object
dtype: object
```
DataFrame Integration {#dataframe-integration}
class chdb.dataframe.Table {#chdb-dataframe-table}
Bases:
```python
class chdb.dataframe.Table(*args: Any, **kwargs: Any)
```
Database API (DBAPI) 2.0 Interface {#database-api-interface}
chDB provides a Python DB-API 2.0 compatible interface for database connectivity, allowing you to use chDB with tools and frameworks that expect standard database interfaces.
The chDB DB-API 2.0 interface includes:
Connections
: Database connection management with connection strings
Cursors
: Query execution and result retrieval
Type System
: DB-API 2.0 compliant type constants and converters
Error Handling
: Standard database exception hierarchy
Thread Safety
: Level 1 thread safety (threads may share modules but not connections)
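Because chDB follows the standard DB-API 2.0 interface, the connect/cursor/fetch flow is the same one used by any PEP 249 driver. The sketch below illustrates that flow using the stdlib `sqlite3` module as a stand-in so it runs without chDB installed; with chDB you would call `chdb.dbapi.connect()` instead.

```python
# Standard DB-API 2.0 flow, illustrated with stdlib sqlite3 as a stand-in.
# The same connect() -> cursor() -> execute() -> fetch*() shape applies to
# chdb.dbapi (this is an illustration, not chDB's own code).
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER, name TEXT)")
cur.executemany("INSERT INTO users VALUES (?, ?)", [(1, "Alice"), (2, "Bob")])
cur.execute("SELECT id, name FROM users ORDER BY id")
rows = cur.fetchall()
print(rows)  # [(1, 'Alice'), (2, 'Bob')]
cur.close()
conn.close()
```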
Core Functions {#core-functions}
The Database API (DBAPI) 2.0 Interface implements the following core functions:
chdb.dbapi.connect {#dbapi-connect}
Initialize a new database connection.
Syntax
```python
chdb.dbapi.connect(*args, **kwargs)
```
Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------------------------------------------|
| `path` | str | `None` | Database file path. None for in-memory database |
Raises
| Exception | Condition |
|-------------|-------------------------------------|
| `err.Error` | If connection cannot be established |
chdb.dbapi.get_client_info() {#dbapi-get-client-info}
Get client version information.
Returns the chDB client version as a string for MySQLdb compatibility.
Syntax
```python
chdb.dbapi.get_client_info()
```
Returns
| Return Type | Description |
|-------------|----------------------------------------------|
| `str` | Version string in format 'major.minor.patch' |
Type constructors {#type-constructors}
chdb.dbapi.Binary(x) {#dbapi-binary}
Return x as a binary type.
This function converts the input to bytes type for use with binary
database fields, following the DB-API 2.0 specification.
Syntax
```python
chdb.dbapi.Binary(x)
```
Parameters
| Parameter | Type | Description |
|-----------|------|---------------------------------|
| `x` | - | Input data to convert to binary |
Returns
| Return Type | Description |
|-------------|------------------------------|
| `bytes` | The input converted to bytes |
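PEP 249 defines `Binary` as a constructor that produces a bytes-like object for binary columns. The sketch below shows the expected semantics with a plain-Python stand-in; `chdb.dbapi.Binary` behaves equivalently for bytes-like input (whether it additionally accepts `str` is not specified here).

```python
# Sketch of DB-API 2.0 Binary() semantics: convert input to bytes.
# Stand-in for illustration, not chDB's implementation.
def Binary(x):
    return bytes(x)

print(Binary(b"raw"))        # b'raw'
print(Binary([0x68, 0x69]))  # b'hi'
```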
Connection Class {#connection-class}
class chdb.dbapi.connections.Connection(path=None) {#chdb-dbapi-connections-connection}
Bases: `object`
DB-API 2.0 compliant connection to chDB database.
This class provides a standard DB-API interface for connecting to and interacting
with chDB databases. It supports both in-memory and file-based databases.
The connection manages the underlying chDB engine and provides methods for
executing queries, managing transactions (no-op for ClickHouse), and creating cursors.
```python
class chdb.dbapi.connections.Connection(path=None)
```
Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|--------------------------------------------------------------------------------------------------------------------|
| `path` | str | `None` | Database file path. If None, uses in-memory database. Can be a file path like 'database.db' or None for ':memory:' |
Variables
| Variable | Type | Description |
|------------|------|----------------------------------------------------|
| `encoding` | str | Character encoding for queries, defaults to 'utf8' |
| `open` | bool | True if connection is open, False if closed |
Examples
```pycon
# In-memory database
conn = Connection()
cursor = conn.cursor()
cursor.execute("SELECT 1")
result = cursor.fetchall()
conn.close()
```
```pycon
# File-based database
conn = Connection('mydata.db')
with conn.cursor() as cur:
... cur.execute("CREATE TABLE users (id INT, name STRING) ENGINE = MergeTree() order by id")
... cur.execute("INSERT INTO users VALUES (1, 'Alice')")
conn.close()
```
```pycon
# Context manager usage
with Connection() as cur:
... cur.execute("SELECT version()")
... version = cur.fetchone()
```
:::note
ClickHouse does not support traditional transactions, so commit() and rollback()
operations are no-ops but provided for DB-API compliance.
:::
close {#dbapi-connection-close}
Close the database connection.
Closes the underlying chDB connection and marks this connection as closed.
Subsequent operations on this connection will raise an Error.
Syntax
```python
close()
```
Raises
| Exception | Condition |
|-------------|---------------------------------|
| `err.Error` | If connection is already closed |
commit {#dbapi-commit}
Commit the current transaction.
Syntax
```python
commit()
```
:::note
This is a no-op for chDB/ClickHouse as it doesn't support traditional
transactions. Provided for DB-API 2.0 compliance.
:::
cursor {#dbapi-cursor}
Create a new cursor for executing queries.
Syntax
```python
cursor(cursor=None)
```
Parameters
| Parameter | Type | Description |
|-----------|------|-------------------------------------|
| `cursor` | - | Ignored, provided for compatibility |
Returns
| Return Type | Description |
|-------------|---------------------------------------|
| `Cursor` | New cursor object for this connection |
Raises
| Exception | Condition |
|-------------|-------------------------|
| `err.Error` | If connection is closed |
Example
```pycon
conn = Connection()
cur = conn.cursor()
cur.execute("SELECT 1")
result = cur.fetchone()
```
escape {#escape}
Escape a value for safe inclusion in SQL queries.
Syntax
```python
escape(obj, mapping=None)
```
Parameters
| Parameter | Type | Description |
|-----------|------|-----------------------------------------------|
| `obj` | - | Value to escape (string, bytes, number, etc.) |
| `mapping` | - | Optional character mapping for escaping |
Returns
| Return Type | Description |
|-------------|-------------------------------------------------------|
| - | Escaped version of the input suitable for SQL queries |
Example
```pycon
conn = Connection()
safe_value = conn.escape("O'Reilly")
query = f"SELECT * FROM users WHERE name = {safe_value}"
```
escape_string {#escape-string}
Escape a string value for SQL queries.
Syntax
```python
escape_string(s)
```
Parameters
| Parameter | Type | Description |
|-----------|------|------------------|
| `s` | str | String to escape |
Returns
| Return Type | Description |
|-------------|---------------------------------------|
| `str` | Escaped string safe for SQL inclusion |
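To make the idea concrete, string escaping doubles or backslash-escapes the characters that would otherwise terminate a quoted SQL literal. The sketch below is a deliberately simplified stand-in (the real `escape_string` covers more cases, such as NUL bytes and control characters); where possible, prefer parameter binding over manual escaping.

```python
# Simplified sketch of SQL string escaping: backslash-escape the
# characters that would break out of a quoted literal. Illustration
# only -- not chDB's actual implementation.
def escape_string_sketch(s: str) -> str:
    return s.replace("\\", "\\\\").replace("'", "\\'")

value = escape_string_sketch("O'Reilly")
query = f"SELECT * FROM users WHERE name = '{value}'"
print(query)  # SELECT * FROM users WHERE name = 'O\'Reilly'
```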
property open {#property-open}
Check if the connection is open.
Returns
| Return Type | Description |
|-------------|---------------------------------------------|
| `bool` | True if connection is open, False if closed |
query {#dbapi-query}
Execute a SQL query directly and return raw results.
This method bypasses the cursor interface and executes queries directly.
For standard DB-API usage, prefer using cursor() method.
Syntax
```python
query(sql, fmt='CSV')
```
Parameters
| Parameter | Type | Default | Description |
|-----------|--------------|---------|----------------------------------------------------------------------------------|
| `sql` | str or bytes | *required* | SQL query to execute |
| `fmt` | str | `"CSV"` | Output format. Supported formats include "CSV", "JSON", "Arrow", "Parquet", etc. |
Returns
| Return Type | Description |
|-------------|--------------------------------------|
| - | Query result in the specified format |
Raises
| Exception | Condition |
|----------------------|----------------------------------------|
| `err.InterfaceError` | If connection is closed or query fails |
Example
```pycon
conn = Connection()
result = conn.query("SELECT 1, 'hello'", "CSV")
print(result)
"1,hello\n"
```
property resp {#property-resp}
Get the last query response.
Returns
| Return Type | Description |
|-------------|---------------------------------------------|
| - | The raw response from the last query() call |
:::note
This property is updated each time query() is called directly.
It does not reflect queries executed through cursors.
:::
rollback {#rollback}
Roll back the current transaction.
Syntax
```python
rollback()
```
:::note
This is a no-op for chDB/ClickHouse as it doesn’t support traditional
transactions. Provided for DB-API 2.0 compliance.
:::
Cursor Class {#cursor-class}
class chdb.dbapi.cursors.Cursor {#chdb-dbapi-cursors-cursor}
Bases: `object`
DB-API 2.0 cursor for executing queries and fetching results.
The cursor provides methods for executing SQL statements, managing query results,
and navigating through result sets. It supports parameter binding, bulk operations,
and follows DB-API 2.0 specifications.
Do not create Cursor instances directly. Use `Connection.cursor()` instead.
```python
class chdb.dbapi.cursors.Cursor(connection)
```
| Variable | Type | Description |
|-------------------|-------|-------------------------------------------------------------|
| `description` | tuple | Column metadata for the last query result |
| `rowcount` | int | Number of rows affected by the last query (-1 if unknown) |
| `arraysize` | int | Default number of rows to fetch at once (default: 1) |
| `lastrowid` | - | ID of the last inserted row (if applicable) |
| `max_stmt_length` | int | Maximum statement size for executemany() (default: 1024000) |
Examples
```pycon
conn = Connection()
cur = conn.cursor()
cur.execute("SELECT 1 as id, 'test' as name")
result = cur.fetchone()
print(result) # (1, 'test')
cur.close()
```
:::note
See DB-API 2.0 Cursor Objects for complete specification details.
:::
callproc {#callproc}
Execute a stored procedure (placeholder implementation).
Syntax
```python
callproc(procname, args=())
```
Parameters
| Parameter | Type | Description |
|------------|----------|-------------------------------------|
| `procname` | str | Name of stored procedure to execute |
| `args` | sequence | Parameters to pass to the procedure |
Returns
| Return Type | Description |
|-------------|------------------------------------------|
| `sequence` | The original args parameter (unmodified) |
:::note
chDB/ClickHouse does not support stored procedures in the traditional sense.
This method is provided for DB-API 2.0 compliance but does not perform
any actual operation. Use execute() for all SQL operations.
:::
:::warning Compatibility
This is a placeholder implementation. Traditional stored procedure
features like OUT/INOUT parameters, multiple result sets, and server
variables are not supported by the underlying ClickHouse engine.
:::
close {#dbapi-cursor-close}
Close the cursor and free associated resources.
After closing, the cursor becomes unusable and any operation will raise an exception.
Closing a cursor exhausts all remaining data and releases the underlying cursor.
Syntax
```python
close()
```
execute {#dbapi-execute}
Execute a SQL query with optional parameter binding.
This method executes a single SQL statement with optional parameter substitution.
It supports multiple parameter placeholder styles for flexibility.
Syntax
```python
execute(query, args=None)
```
Parameters
| Parameter | Type | Default | Description |
|-----------|-----------------|------------|------------------------------------|
| `query` | str | *required* | SQL query to execute |
| `args` | tuple/list/dict | `None` | Parameters to bind to placeholders |
Returns
| Return Type | Description |
|-------------|-----------------------------------------|
| `int` | Number of affected rows (-1 if unknown) |
Parameter Styles
| Style | Example |
|---------------------|-------------------------------------------------|
| Question mark style |
"SELECT * FROM users WHERE id = ?"
|
| Named style |
"SELECT * FROM users WHERE name = %(name)s"
|
| Format style |
"SELECT * FROM users WHERE age = %s"
(legacy) |
Examples
```pycon
>>> # Question mark parameters
>>> cur.execute("SELECT * FROM users WHERE id = ? AND age > ?", (123, 18))

>>> # Named parameters
>>> cur.execute("SELECT * FROM users WHERE name = %(name)s", {'name': 'Alice'})

>>> # No parameters
>>> cur.execute("SELECT COUNT(*) FROM users")
```
Raises
| Exception | Condition |
|--------------------|--------------------------------------------|
| `ProgrammingError` | If cursor is closed or query is malformed |
| `InterfaceError` | If database error occurs during execution |
executemany(query, args) {#chdb-dbapi-cursors-cursor-executemany}
Execute a query multiple times with different parameter sets.
This method efficiently executes the same SQL query multiple times with
different parameter values. It’s particularly useful for bulk INSERT operations.
Syntax
```python
executemany(query, args)
```
Parameters
| Parameter | Type | Description |
|------------|----------|-------------------------------------------------------------|
| `query` | str | SQL query to execute multiple times |
| `args` | sequence | Sequence of parameter tuples/dicts/lists for each execution |
Returns
| Return Type | Description |
|--------------|-----------------------------------------------------|
| `int` | Total number of affected rows across all executions |
Examples
```pycon
>>> # Bulk insert with question mark parameters
>>> users_data = [(1, 'Alice'), (2, 'Bob'), (3, 'Charlie')]
>>> cur.executemany("INSERT INTO users VALUES (?, ?)", users_data)

>>> # Bulk insert with named parameters
>>> users_data = [
...     {'id': 1, 'name': 'Alice'},
...     {'id': 2, 'name': 'Bob'}
... ]
>>> cur.executemany(
...     "INSERT INTO users VALUES (%(id)s, %(name)s)",
...     users_data
... )
```
:::note
This method improves performance for multiple-row INSERT and UPDATE operations
by optimizing the query execution process.
:::
fetchall() {#dbapi-fetchall}
Fetch all remaining rows from the query result.
Syntax
```python
fetchall()
```
Returns
| Return Type | Description |
|--------------|------------------------------------------------|
| `list` | List of tuples representing all remaining rows |
Raises
| Exception | Condition |
|--------------------|-------------------------------------------|
| `ProgrammingError` | If `execute()` has not been called first |
:::warning Warning
This method can consume large amounts of memory for big result sets.
Consider using `fetchmany()` for large datasets.
:::
Example
```pycon
>>> cursor.execute("SELECT id, name FROM users")
>>> all_rows = cursor.fetchall()
>>> print(len(all_rows))  # Number of total rows
```
fetchmany {#dbapi-fetchmany}
Fetch multiple rows from the query result.
Syntax
```python
fetchmany(size=1)
```
Parameters
| Parameter | Type | Default | Description |
|------------|-------|----------|------------------------------------------------------------------|
| `size` | int | `1` | Number of rows to fetch. If not specified, uses cursor.arraysize |
Returns
| Return Type | Description |
|--------------|----------------------------------------------|
| `list` | List of tuples representing the fetched rows |
Raises
| Exception | Condition |
|--------------------|-------------------------------------------|
| `ProgrammingError` | If `execute()` has not been called first |
Example
```pycon
>>> cursor.execute("SELECT id, name FROM users")
>>> rows = cursor.fetchmany(3)
>>> print(rows)  # [(1, 'Alice'), (2, 'Bob'), (3, 'Charlie')]
```
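When a result set is too large for `fetchall()`, the call above generalizes to a drain loop that fetches fixed-size batches until the cursor is exhausted. A minimal sketch — the `StubCursor` class below is a hypothetical stand-in so the loop runs without a database; a real chdb cursor exposes the same `fetchmany()` signature:

```python
def iter_batches(cursor, size=1000):
    """Yield lists of rows until the cursor returns an empty batch."""
    while True:
        rows = cursor.fetchmany(size)
        if not rows:
            break
        yield rows

# Hypothetical stand-in for a chdb cursor, used only to demo the loop.
class StubCursor:
    def __init__(self, rows):
        self._rows = list(rows)

    def fetchmany(self, size=1):
        batch, self._rows = self._rows[:size], self._rows[size:]
        return batch

cur = StubCursor([(i, f"user{i}") for i in range(10)])
print([len(b) for b in iter_batches(cur, size=4)])  # [4, 4, 2]
```

With a real connection, `cur` would instead come from `conn.cursor()` after an `execute()` call; the loop body is unchanged.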
fetchone {#dbapi-fetchone}
Fetch the next row from the query result.
Syntax
```python
fetchone()
```
Returns
| Return Type | Description |
|-------------------|--------------------------------------------------------|
| `tuple` or `None` | Next row as a tuple, or None if no more rows available |
Raises
| Exception | Condition |
|--------------------|-------------------------------------------|
| `ProgrammingError` | If `execute()` has not been called first |
Example
```pycon
>>> cursor.execute("SELECT id, name FROM users LIMIT 3")
>>> row = cursor.fetchone()
>>> print(row)  # (1, 'Alice')
>>> row = cursor.fetchone()
>>> print(row)  # (2, 'Bob')
```
max_stmt_length = 1024000 {#max-stmt-length}
Max statement size which `executemany()` generates.
Default value is 1024000.
mogrify {#mogrify}
Return the exact query string that would be sent to the database.
This method shows the final SQL query after parameter substitution,
which is useful for debugging and logging purposes.
Syntax
```python
mogrify(query, args=None)
```
Parameters
| Parameter | Type | Default | Description |
|------------|-----------------|------------|---------------------------------------|
| `query` | str | required | SQL query with parameter placeholders |
| `args` | tuple/list/dict | `None` | Parameters to substitute |
Returns
| Return Type | Description |
|--------------|--------------------------------------------------------|
| `str` | The final SQL query string with parameters substituted |
Example
```pycon
>>> cur.mogrify("SELECT * FROM users WHERE id = ?", (123,))
"SELECT * FROM users WHERE id = 123"
```
:::note
This method follows the extension to DB-API 2.0 used by Psycopg.
:::
nextset {#nextset}
Move to the next result set (not supported).
Syntax
```python
nextset()
```
Returns
| Return Type | Description |
|--------------|---------------------------------------------------------------|
| `None` | Always returns None as multiple result sets are not supported |
:::note
chDB/ClickHouse does not support multiple result sets from a single query.
This method is provided for DB-API 2.0 compliance but always returns None.
:::
setinputsizes {#setinputsizes}
Set input sizes for parameters (no-op implementation).
Syntax
```python
setinputsizes(*args)
```
Parameters
| Parameter | Type | Description |
|------------|-------|------------------------------------------|
| `*args` | - | Parameter size specifications (ignored) |
:::note
This method does nothing but is required by DB-API 2.0 specification.
chDB automatically handles parameter sizing internally.
:::
setoutputsizes {#setoutputsizes}
Set output column sizes (no-op implementation).
Syntax
```python
setoutputsizes(*args)
```
Parameters
| Parameter | Type | Description |
|------------|-------|---------------------------------------|
| `*args` | - | Column size specifications (ignored) |
:::note
This method does nothing but is required by DB-API 2.0 specification.
chDB automatically handles output sizing internally.
:::
Error Classes {#error-classes}
Exception classes for chdb database operations.
This module provides a complete hierarchy of exception classes for handling
database-related errors in chdb, following the Python Database API Specification v2.0.
The exception hierarchy is structured as follows:
```default
StandardError
├── Warning
└── Error
    ├── InterfaceError
    └── DatabaseError
        ├── DataError
        ├── OperationalError
        ├── IntegrityError
        ├── InternalError
        ├── ProgrammingError
        └── NotSupportedError
```
Each exception class represents a specific category of database errors:
| Exception | Description |
|---------------------|-------------------------------------------------------------|
| `Warning` | Non-fatal warnings during database operations |
| `InterfaceError` | Problems with the database interface itself |
| `DatabaseError` | Base class for all database-related errors |
| `DataError` | Problems with data processing (invalid values, type errors) |
| `OperationalError` | Database operational issues (connectivity, resources) |
| `IntegrityError` | Constraint violations (foreign keys, uniqueness) |
| `InternalError` | Database internal errors and corruption |
| `ProgrammingError` | SQL syntax errors and API misuse |
| `NotSupportedError` | Unsupported features or operations |
:::note
These exception classes are compliant with Python DB API 2.0 specification
and provide consistent error handling across different database operations.
:::
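Because the classes nest as shown above, a handler written for a base class also catches every subclass, so `except` clauses should be ordered from most to least specific. The sketch below mirrors the documented hierarchy with local stand-in classes so it runs standalone; in real code you would catch the classes exported by `chdb.dbapi.err`:

```python
# Local stand-ins mirroring the documented hierarchy (illustrative only).
class StandardError(Exception): pass
class Error(StandardError): pass
class InterfaceError(Error): pass
class DatabaseError(Error): pass
class ProgrammingError(DatabaseError): pass

def classify(exc):
    """Label an exception the way a typical handler ladder would."""
    try:
        raise exc
    except ProgrammingError:   # most specific first
        return "fix your SQL"
    except DatabaseError:      # catches all other DatabaseError subclasses
        return "database problem"
    except Error:              # interface-level failures
        return "driver problem"

print(classify(ProgrammingError("bad syntax")))  # fix your SQL
print(classify(DatabaseError("constraint")))     # database problem
print(classify(InterfaceError("closed conn")))   # driver problem
```

Reversing the clause order would make the `DatabaseError` handler swallow `ProgrammingError` before the specific branch is reached.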
See also
- Python Database API Specification v2.0
- `chdb.dbapi.connections` - Database connection management
- `chdb.dbapi.cursors` - Database cursor operations
Examples
```pycon
>>> try:
...     cursor.execute("SELECT * FROM nonexistent_table")
... except ProgrammingError as e:
...     print(f"SQL Error: {e}")
...
SQL Error: Table 'nonexistent_table' doesn't exist
```
```pycon
>>> try:
...     cursor.execute("INSERT INTO users (id) VALUES (1), (1)")
... except IntegrityError as e:
...     print(f"Constraint violation: {e}")
...
Constraint violation: Duplicate entry '1' for key 'PRIMARY'
```
exception chdb.dbapi.err.DataError {#chdb-dbapi-err-dataerror}
Bases: `DatabaseError`
Exception raised for errors that are due to problems with the processed data.
This exception is raised when database operations fail due to issues with
the data being processed, such as:
- Division by zero operations
- Numeric values out of range
- Invalid date/time values
- String truncation errors
- Type conversion failures
- Invalid data format for column type
Raises
| Exception | Condition |
|-------------|-------------------------------------------|
| `DataError` | When data validation or processing fails |
Examples
```pycon
>>> # Division by zero in SQL
>>> cursor.execute("SELECT 1/0")
DataError: Division by zero
```
```pycon
>>> # Invalid date format
>>> cursor.execute("INSERT INTO table VALUES ('invalid-date')")
DataError: Invalid date format
```
exception chdb.dbapi.err.DatabaseError {#chdb-dbapi-err-databaseerror}
Bases: `Error`
Exception raised for errors that are related to the database.
This is the base class for all database-related errors. It encompasses
all errors that occur during database operations and are related to the
database itself rather than the interface.
Common scenarios include:
- SQL execution errors
- Database connectivity issues
- Transaction-related problems
- Database-specific constraints violations
:::note
This serves as the parent class for more specific database error types
such as `DataError`, `OperationalError`, etc.
:::
exception chdb.dbapi.err.Error {#chdb-dbapi-err-error}
Bases: `StandardError`
Exception that is the base class of all other error exceptions (not Warning).
This is the base class for all error exceptions in chdb, excluding warnings.
It serves as the parent class for all database error conditions that prevent
successful completion of operations.
:::note
This exception hierarchy follows the Python DB API 2.0 specification.
:::
See also
- `Warning` - For non-fatal warnings that don’t prevent operation completion
exception chdb.dbapi.err.IntegrityError {#chdb-dbapi-err-integrityerror}
Bases: `DatabaseError`
Exception raised when the relational integrity of the database is affected.
This exception is raised when database operations violate integrity constraints,
including:
- Foreign key constraint violations
- Primary key or unique constraint violations (duplicate keys)
- Check constraint violations
- NOT NULL constraint violations
- Referential integrity violations
Raises
| Exception | Condition |
|------------------|---------------------------------------------------|
| `IntegrityError` | When database integrity constraints are violated |
Examples
```pycon
>>> # Duplicate primary key
>>> cursor.execute("INSERT INTO users (id, name) VALUES (1, 'John')")
>>> cursor.execute("INSERT INTO users (id, name) VALUES (1, 'Jane')")
IntegrityError: Duplicate entry '1' for key 'PRIMARY'
```
```pycon
>>> # Foreign key violation
>>> cursor.execute("INSERT INTO orders (user_id) VALUES (999)")
IntegrityError: Cannot add or update a child row: foreign key constraint fails
```
exception chdb.dbapi.err.InterfaceError {#chdb-dbapi-err-interfaceerror}
Bases: `Error`
Exception raised for errors that are related to the database interface rather than the database itself.
This exception is raised when there are problems with the database interface
implementation, such as:
- Invalid connection parameters
- API misuse (calling methods on closed connections)
- Interface-level protocol errors
- Module import or initialization failures
Raises
| Exception | Condition |
|------------------|------------------------------------------------------------------------------|
| `InterfaceError` | When database interface encounters errors unrelated to database operations |
:::note
These errors are typically programming errors or configuration issues
that can be resolved by fixing the client code or configuration.
:::
exception chdb.dbapi.err.InternalError {#chdb-dbapi-err-internalerror}
Bases: `DatabaseError`
Exception raised when the database encounters an internal error.
This exception is raised when the database system encounters internal
errors that are not caused by the application, such as:
- Invalid cursor state (cursor is not valid anymore)
- Transaction state inconsistencies (transaction is out of sync)
- Database corruption issues
- Internal data structure corruption
- System-level database errors
Raises
| Exception | Condition |
|-----------------|-----------------------------------------------------|
| `InternalError` | When database encounters internal inconsistencies |
:::warning Warning
Internal errors may indicate serious database problems that require
database administrator attention. These errors are typically not
recoverable through application-level retry logic.
:::
:::note
These errors are generally outside the control of the application
and may require database restart or repair operations.
:::
exception chdb.dbapi.err.NotSupportedError {#chdb-dbapi-err-notsupportederror}
Bases: `DatabaseError`
Exception raised when a method or database API is not supported.
This exception is raised when the application attempts to use database
features or API methods that are not supported by the current database
configuration or version, such as:
- Requesting `rollback()` on connections without transaction support
- Using advanced SQL features not supported by the database version
- Calling methods not implemented by the current driver
- Attempting to use disabled database features
Raises
| Exception | Condition |
|---------------------|---------------------------------------------------|
| `NotSupportedError` | When unsupported database features are accessed |
Examples
```pycon
>>> # Transaction rollback on non-transactional connection
>>> connection.rollback()
NotSupportedError: Transactions are not supported
```
```pycon
>>> # Using unsupported SQL syntax
>>> cursor.execute("SELECT * FROM table WITH (NOLOCK)")
NotSupportedError: WITH clause not supported in this database version
```
:::note
Check database documentation and driver capabilities to avoid
these errors. Consider graceful fallbacks where possible.
:::
exception chdb.dbapi.err.OperationalError {#chdb-dbapi-err-operationalerror}
Bases: `DatabaseError`
Exception raised for errors that are related to the database’s operation.
This exception is raised for errors that occur during database operation
and are not necessarily under the control of the programmer, including:
- Unexpected disconnection from database
- Database server not found or unreachable
- Transaction processing failures
- Memory allocation errors during processing
- Disk space or resource exhaustion
- Database server internal errors
- Authentication or authorization failures
Raises
| Exception | Condition |
|--------------------|----------------------------------------------------------|
| `OperationalError` | When database operations fail due to operational issues |
:::note
These errors are typically transient and may be resolved by retrying
the operation or addressing system-level issues.
:::
:::warning Warning
Some operational errors may indicate serious system problems that
require administrative intervention.
:::
exception chdb.dbapi.err.ProgrammingError {#chdb-dbapi-err-programmingerror}
Bases: `DatabaseError`
Exception raised for programming errors in database operations.
This exception is raised when there are programming errors in the
application’s database usage, including:
- Table or column not found
- Table or index already exists when creating
- SQL syntax errors in statements
- Wrong number of parameters specified in prepared statements
- Invalid SQL operations (e.g., DROP on non-existent objects)
- Incorrect usage of database API methods
Raises
| Exception | Condition |
|--------------------|---------------------------------------------------|
| `ProgrammingError` | When SQL statements or API usage contains errors |
Examples
```pycon
>>> # Table not found
>>> cursor.execute("SELECT * FROM nonexistent_table")
ProgrammingError: Table 'nonexistent_table' doesn't exist
```
```pycon
>>> # SQL syntax error
>>> cursor.execute("SELCT * FROM users")
ProgrammingError: You have an error in your SQL syntax
```
```pycon
>>> # Wrong parameter count
>>> cursor.execute("INSERT INTO users (name, age) VALUES (%s)", ('John',))
ProgrammingError: Column count doesn't match value count
```
exception chdb.dbapi.err.StandardError {#chdb-dbapi-err-standarderror}
Bases: `Exception`
Exception related to operation with chdb.
This is the base class for all chdb-related exceptions. It inherits from
Python’s built-in Exception class and serves as the root of the exception
hierarchy for database operations.
:::note
This exception class follows the Python DB API 2.0 specification
for database exception handling.
:::
exception chdb.dbapi.err.Warning {#chdb-dbapi-err-warning}
Bases: `StandardError`
Exception raised for important warnings like data truncations while inserting, etc.
This exception is raised when the database operation completes but with
important warnings that should be brought to the attention of the application.
Common scenarios include:
- Data truncation during insertion
- Precision loss in numeric conversions
- Character set conversion warnings
:::note
This follows the Python DB API 2.0 specification for warning exceptions.
:::
Module Constants {#module-constants}
chdb.dbapi.apilevel = '2.0' {#apilevel}
String constant stating the supported DB-API level. chDB implements the
Python Database API Specification v2.0.
chdb.dbapi.threadsafety = 1 {#threadsafety}
Integer constant stating the level of thread safety the interface supports.
Level 1 means threads may share the module, but not connections.
chdb.dbapi.paramstyle = 'format' {#paramstyle}
String constant stating the type of parameter marker formatting expected by
the interface. 'format' means ANSI C printf-style format codes, e.g. `%s`.
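Since `paramstyle` is `'format'`, queries written for this interface use `%s` markers. Conceptually, binding quotes each value and substitutes it into the statement. The sketch below is illustrative only — real drivers escape values internally, and SQL should never be assembled this way in production:

```python
def naive_bind(query, args):
    """Illustrative %s substitution with minimal string quoting."""
    def quote(value):
        if isinstance(value, str):
            # Double embedded single quotes, then wrap in quotes.
            return "'" + value.replace("'", "''") + "'"
        return str(value)
    return query % tuple(quote(a) for a in args)

sql = naive_bind("SELECT * FROM users WHERE name = %s AND age > %s",
                 ("O'Brien", 18))
print(sql)  # SELECT * FROM users WHERE name = 'O''Brien' AND age > 18
```

In application code, always pass `args` to `execute()` and let the driver perform the substitution.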
Type Constants {#type-constants}
chdb.dbapi.STRING = frozenset({247, 253, 254}) {#string-type}
Extended frozenset for DB-API 2.0 type comparison.
This class extends frozenset to support DB-API 2.0 type comparison semantics.
It allows for flexible type checking where individual items can be compared
against the set using both equality and inequality operators.
This is used for type constants like STRING, BINARY, NUMBER, etc. to enable
comparisons like “field_type == STRING” where field_type is a single type value.
Examples
```pycon
>>> string_types = DBAPISet([FIELD_TYPE.STRING, FIELD_TYPE.VAR_STRING])
>>> FIELD_TYPE.STRING == string_types  # Returns True
>>> FIELD_TYPE.INT != string_types  # Returns True
>>> FIELD_TYPE.BLOB in string_types  # Returns False
```
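The comparison behavior in the example can be reproduced with a small `frozenset` subclass whose equality operators treat a scalar operand as a membership test. This is a hypothetical re-implementation for illustration, not chdb's actual class:

```python
class DBAPISet(frozenset):
    """frozenset whose ==/!= against a scalar means membership."""

    def __eq__(self, other):
        if isinstance(other, frozenset):
            # Set vs set: ordinary set equality.
            return frozenset.__eq__(self, other)
        # Scalar vs set: membership test, per DB-API comparison semantics.
        return other in self

    def __ne__(self, other):
        return not self.__eq__(other)

    __hash__ = frozenset.__hash__

STRING = DBAPISet({247, 253, 254})
print(253 == STRING)  # True: 253 is a member
print(99 != STRING)   # True: 99 is not a member
```

Note that `253 == STRING` works because `int.__eq__` returns `NotImplemented` for a set operand, so Python falls back to the reflected `DBAPISet.__eq__`.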
chdb.dbapi.BINARY = frozenset({249, 250, 251, 252}) {#binary-type}
Extended frozenset for DB-API 2.0 type comparison; see `STRING` above for the shared `DBAPISet` comparison semantics and examples.
chdb.dbapi.NUMBER = frozenset({0, 1, 3, 4, 5, 8, 9, 13}) {#number-type}
Extended frozenset for DB-API 2.0 type comparison; see `STRING` above for the shared `DBAPISet` comparison semantics and examples.
chdb.dbapi.DATE = frozenset({10, 14}) {#date-type}
Extended frozenset for DB-API 2.0 type comparison; see `STRING` above for the shared `DBAPISet` comparison semantics and examples.
chdb.dbapi.TIME = frozenset({11}) {#time-type}
Extended frozenset for DB-API 2.0 type comparison; see `STRING` above for the shared `DBAPISet` comparison semantics and examples.
chdb.dbapi.TIMESTAMP = frozenset({7, 12}) {#timestamp-type}
Extended frozenset for DB-API 2.0 type comparison; see `STRING` above for the shared `DBAPISet` comparison semantics and examples.
chdb.dbapi.DATETIME = frozenset({7, 12}) {#datetime-type}
Extended frozenset for DB-API 2.0 type comparison; see `STRING` above for the shared `DBAPISet` comparison semantics and examples.
chdb.dbapi.ROWID = frozenset({}) {#rowid-type}
Extended frozenset for DB-API 2.0 type comparison; see `STRING` above for the shared `DBAPISet` comparison semantics and examples.
Usage Examples
Basic Query Example:
```python
import chdb.dbapi as dbapi

print("chdb driver version: {0}".format(dbapi.get_client_info()))

# Create connection and cursor
conn = dbapi.connect()
cur = conn.cursor()

# Execute query
cur.execute('SELECT version()')
print("description:", cur.description)
print("data:", cur.fetchone())

# Clean up
cur.close()
conn.close()
```
Working with Data:
```python
import chdb.dbapi as dbapi

conn = dbapi.connect()
cur = conn.cursor()

# Create table
cur.execute("""
CREATE TABLE employees (
    id UInt32,
    name String,
    department String,
    salary Decimal(10,2)
) ENGINE = Memory
""")

# Insert data
cur.execute("""
INSERT INTO employees VALUES
    (1, 'Alice', 'Engineering', 75000.00),
    (2, 'Bob', 'Marketing', 65000.00),
    (3, 'Charlie', 'Engineering', 80000.00)
""")

# Query data
cur.execute("SELECT * FROM employees WHERE department = 'Engineering'")

# Fetch results
print("Column names:", [desc[0] for desc in cur.description])
for row in cur.fetchall():
    print(row)

conn.close()
```
Connection Management:
```python
import chdb.dbapi as dbapi

# In-memory database (default)
conn1 = dbapi.connect()

# Persistent database file
conn2 = dbapi.connect("./my_database.chdb")

# Connection with parameters
conn3 = dbapi.connect("./my_database.chdb?log-level=debug&verbose")

# Read-only connection
conn4 = dbapi.connect("./my_database.chdb?mode=ro")

# Automatic connection cleanup
with dbapi.connect("test.chdb") as conn:
    cur = conn.cursor()
    cur.execute("SELECT count() FROM numbers(1000)")
    result = cur.fetchone()
    print(f"Count: {result[0]}")
    cur.close()
```
Best Practices
- **Connection Management**: Always close connections and cursors when done
- **Context Managers**: Use `with` statements for automatic cleanup
- **Batch Processing**: Use `fetchmany()` for large result sets
- **Error Handling**: Wrap database operations in try-except blocks
- **Parameter Binding**: Use parameterized queries when possible
- **Memory Management**: Avoid `fetchall()` for very large datasets
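Several of these practices combine naturally: `contextlib.closing` from the standard library gives any object with a `close()` method — such as a DB-API cursor — `with`-statement cleanup, even when an exception escapes the block. The sketch uses a hypothetical stub cursor so it runs standalone:

```python
from contextlib import closing

class StubCursor:
    """Hypothetical stand-in for a chdb cursor; records close()."""
    def __init__(self):
        self.closed = False

    def execute(self, sql):
        return 0  # pretend the statement ran

    def close(self):
        self.closed = True

cur = StubCursor()
with closing(cur) as c:
    c.execute("SELECT 1")
print(cur.closed)  # True: close() ran when the block exited
```

With a real connection the pattern is `with closing(conn.cursor()) as cur: ...`, which pairs with the connection's own context-manager support shown above.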
:::note
- chDB’s DB-API 2.0 interface is compatible with most Python database tools
- The interface provides Level 1 thread safety (threads may share modules but not connections)
- Connection strings support the same parameters as chDB sessions
- All standard DB-API 2.0 exceptions are supported
:::
:::warning Warning
- Always close cursors and connections to avoid resource leaks
- Large result sets should be processed in batches
- Parameter binding syntax follows the format style: `%s`
:::
User-Defined Functions (UDF) {#user-defined-functions}
User-defined functions module for chDB.
This module provides functionality for creating and managing user-defined functions (UDFs)
in chDB. It allows you to extend chDB’s capabilities by writing custom Python functions
that can be called from SQL queries.
chdb.udf.chdb_udf {#chdb-udf}
Decorator for a chDB Python UDF (User Defined Function).
Syntax
`chdb.udf.chdb_udf(return_type='String')`
Parameters
| Parameter | Type | Default | Description |
|---------------|-------|------------|-------------------------------------------------------------------------|
| `return_type` | str | `"String"` | Return type of the function. Should be one of the ClickHouse data types |
Notes
- The function should be stateless. Only UDFs are supported, not UDAFs (User Defined Aggregate Functions).
- The default return type is String. The return type should be one of the ClickHouse data types.
- The function should take in arguments of type String: all arguments are strings.
- The function will be called for each line of input.
- The function should be a pure Python function. Import all modules used inside the function.
- The Python interpreter used is the same as the one used to run the script.
Example
```python
@chdb_udf()
def sum_udf(lhs, rhs):
return int(lhs) + int(rhs)
@chdb_udf()
def func_use_json(arg):
import json
# ... use json module
```
chdb.udf.generate_udf {#generate-udf}
Generate UDF configuration and executable script files.
This function creates the necessary files for a User Defined Function (UDF) in chDB:
1. A Python executable script that processes input data
2. An XML configuration file that registers the UDF with ClickHouse
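To make the registration step concrete, here is a sketch that builds the kind of XML document ClickHouse uses to register an executable UDF. The element names follow ClickHouse's executable-UDF configuration format; the exact files that `generate_udf` emits may differ, so treat this as illustrative only, not as chDB's implementation.

```python
def build_udf_xml(func_name, args, return_type, command):
    # Sketch of a ClickHouse executable-UDF registration document;
    # the real generate_udf output may use different details.
    arg_xml = "".join(
        f"<argument><type>String</type><name>{a}</name></argument>" for a in args
    )
    return (
        "<functions><function>"
        "<type>executable</type>"
        f"<name>{func_name}</name>"
        f"<return_type>{return_type}</return_type>"
        f"{arg_xml}"
        "<format>TabSeparated</format>"
        f"<command>{command}</command>"
        "</function></functions>"
    )

print(build_udf_xml("sum_udf", ["lhs", "rhs"], "Int32", "sum_udf.py"))
```

Each argument is declared as String, matching the UDF contract above that all arguments arrive as strings, one line of input per call.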
Syntax
`chdb.udf.generate_udf(func_name, args, return_type, udf_body)`
Parameters
| Parameter | Type | Description |
|---------------|-------|---------------------------------------------|
| `func_name` | str | Name of the UDF function |
| `args` | list | List of argument names for the function |
| `return_type` | str | ClickHouse return type for the function |
| `udf_body` | str | Python source code body of the UDF function |
:::note
This function is typically called by the @chdb_udf decorator and should not
be called directly by users.
:::
Utilities {#utilities}
Utility functions and helpers for chDB.
This module contains various utility functions for working with chDB, including
data type inference, data conversion helpers, and debugging utilities.
chdb.utils.convert_to_columnar {#convert-to-columnar}
Converts a list of dictionaries into a columnar format.
This function takes a list of dictionaries and converts it into a dictionary
where each key corresponds to a column and each value is a list of column values.
Missing values in the dictionaries are represented as None.
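The behavior can be sketched in a few lines of plain Python. This illustrates the documented semantics only; it is not chDB's implementation, and the helper name `to_columnar` is chosen here to avoid shadowing the real function.

```python
from typing import Any, Dict, List

def to_columnar(items: List[Dict[str, Any]]) -> Dict[str, List[Any]]:
    # Collect column names in first-seen order, then pad missing values with None
    columns: List[str] = []
    for item in items:
        for key in item:
            if key not in columns:
                columns.append(key)
    return {col: [item.get(col) for item in items] for col in columns}

print(to_columnar([{"name": "Alice", "age": 30}, {"name": "Bob"}]))
# {'name': ['Alice', 'Bob'], 'age': [30, None]}
```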
Syntax
`chdb.utils.convert_to_columnar(items: List[Dict[str, Any]]) → Dict[str, List[Any]]`
Parameters
| Parameter | Type | Description |
|------------|------------------------|-----------------------------------|
| `items` | `List[Dict[str, Any]]` | A list of dictionaries to convert |
Returns
| Return Type | Description |
|------------------------|-----------------------------------------------------------------------------|
| `Dict[str, List[Any]]` | A dictionary with keys as column names and values as lists of column values |
Example
```pycon
>>> items = [
...     {"name": "Alice", "age": 30, "city": "New York"},
...     {"name": "Bob", "age": 25},
...     {"name": "Charlie", "city": "San Francisco"}
... ]
>>> convert_to_columnar(items)
{
    'name': ['Alice', 'Bob', 'Charlie'],
    'age': [30, 25, None],
    'city': ['New York', None, 'San Francisco']
}
```
chdb.utils.flatten_dict {#flatten-dict}
Flattens a nested dictionary.
This function takes a nested dictionary and flattens it, concatenating nested keys
with a separator. Lists of dictionaries are serialized to JSON strings.
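The flattening rules (nested keys joined with the separator, lists of dicts serialized to JSON, mixed lists expanded by index) can be sketched as follows. This is an illustration of the documented behavior, not chDB's implementation.

```python
import json
from typing import Any, Dict

def flatten_dict(d: Dict[str, Any], parent_key: str = '', sep: str = '_') -> Dict[str, Any]:
    out: Dict[str, Any] = {}
    for key, value in d.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            out.update(flatten_dict(value, new_key, sep))
        elif isinstance(value, list) and value and all(isinstance(v, dict) for v in value):
            out[new_key] = json.dumps(value)       # list of dicts -> JSON string
        elif isinstance(value, list):
            for i, v in enumerate(value):          # mixed list -> expand by index
                if isinstance(v, dict):
                    out.update(flatten_dict(v, f"{new_key}{sep}{i}", sep))
                else:
                    out[f"{new_key}{sep}{i}"] = v
        else:
            out[new_key] = value
    return out

print(flatten_dict({"a": 1, "b": {"c": 2}, "h": [{"i": 7}, {"j": 8}]}))
# {'a': 1, 'b_c': 2, 'h': '[{"i": 7}, {"j": 8}]'}
```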
Syntax
`chdb.utils.flatten_dict(d: Dict[str, Any], parent_key: str = '', sep: str = '_') → Dict[str, Any]`
Parameters
| Parameter | Type | Default | Description |
|--------------|------------------|----------|------------------------------------------------|
| `d` | `Dict[str, Any]` | required | The dictionary to flatten |
| `parent_key` | str | `""` | The base key to prepend to each key |
| `sep` | str | `"_"` | The separator to use between concatenated keys |
Returns
| Return Type | Description |
|------------------|------------------------|
| `Dict[str, Any]` | A flattened dictionary |
Example
```pycon
>>> nested_dict = {
...     "a": 1,
...     "b": {
...         "c": 2,
...         "d": {
...             "e": 3
...         }
...     },
...     "f": [4, 5, {"g": 6}],
...     "h": [{"i": 7}, {"j": 8}]
... }
>>> flatten_dict(nested_dict)
{
    'a': 1,
    'b_c': 2,
    'b_d_e': 3,
    'f_0': 4,
    'f_1': 5,
    'f_2_g': 6,
    'h': '[{"i": 7}, {"j": 8}]'
}
```
chdb.utils.infer_data_type {#infer-data-type}
Infers the most suitable data type for a list of values.
This function examines a list of values and determines the most appropriate
data type that can represent all the values in the list. It considers integer,
unsigned integer, decimal, and float types, and defaults to “string” if the
values cannot be represented by any numeric type or if all values are None.
Syntax
`chdb.utils.infer_data_type(values: List[Any]) → str`
Parameters
| Parameter | Type | Description |
|------------|-------------|------------------------------------------------------------|
| `values` | `List[Any]` | A list of values to analyze. The values can be of any type |
Returns
| Return Type | Description |
|-------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| str | A string representing the inferred data type. Possible return values are: "int8", "int16", "int32", "int64", "int128", "int256", "uint8", "uint16", "uint32", "uint64", "uint128", "uint256", "decimal128", "decimal256", "float32", "float64", or "string". |
:::note
- If all values in the list are None, the function returns “string”.
- If any value in the list is a string, the function immediately returns “string”.
- The function assumes that numeric values can be represented as integers,
decimals, or floats based on their range and precision.
:::
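The range-based part of this inference can be illustrated with a simplified sketch that only covers integer types; the real chDB function also handles decimals, floats, and the 128/256-bit widths, so treat this purely as an illustration of the idea.

```python
def infer_int_type(values):
    # Simplified sketch: pick the narrowest integer type that fits all values,
    # falling back to "string" for None-only or non-integer input.
    nums = [v for v in values if v is not None]
    if not nums or any(not isinstance(v, int) for v in nums):
        return "string"
    lo, hi = min(nums), max(nums)
    if lo >= 0:
        for bits in (8, 16, 32, 64):
            if hi < 2 ** bits:
                return f"uint{bits}"
    else:
        for bits in (8, 16, 32, 64):
            if -(2 ** (bits - 1)) <= lo and hi < 2 ** (bits - 1):
                return f"int{bits}"
    return "string"

print(infer_int_type([1, 2, 300]))  # uint16
print(infer_int_type([-5, 100]))    # int8
```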
chdb.utils.infer_data_types {#infer-data-types}
Infers data types for each column in a columnar data structure.
This function analyzes the values in each column and infers the most suitable
data type for each column, based on a sample of the data.
Syntax
`chdb.utils.infer_data_types(column_data: Dict[str, List[Any]], n_rows: int = 10000) → List[tuple]`
Parameters
| Parameter | Type | Default | Description |
|---------------|------------------------|------------|--------------------------------------------------------------------------------|
|
column_data
|
Dict[str, List[Any]]
|
required
| A dictionary where keys are column names and values are lists of column values |
|
n_rows
| int |
10000
| The number of rows to sample for type inference |
Returns
| Return Type | Description |
|---------------|----------------------------------------------------------------------------|
| `List[tuple]` | A list of tuples, each containing a column name and its inferred data type |
Abstract Base Classes {#abstract-base-classes}
class chdb.rwabc.PyReader(data: Any) {#pyreader}
Bases: `ABC`
abstractmethod read {#read}
Read a specified number of rows from the given columns and return a list of objects,
where each object is a sequence of values for a column.
`abstractmethod read(col_names: List[str], count: int) → List[Any]`
Parameters
| Parameter | Type | Description |
|-------------|-------------|--------------------------------|
| `col_names` | `List[str]` | List of column names to read |
| `count` | int | Maximum number of rows to read |
Returns
| Return Type | Description |
|-------------|----------------------------------------|
| `List[Any]` | List of sequences, one for each column |
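A reader subclass typically keeps a cursor between `read()` calls and returns at most `count` rows per call, signalling exhaustion with empty columns. The sketch below uses a local ABC stand-in so it runs on its own; in real code you would subclass `chdb.rwabc.PyReader` instead.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List

class PyReader(ABC):  # stand-in mirroring chdb.rwabc.PyReader's contract
    def __init__(self, data: Any):
        self.data = data

    @abstractmethod
    def read(self, col_names: List[str], count: int) -> List[Any]: ...

class DictReader(PyReader):
    """Serves blocks from a dict of column lists, advancing a cursor each call."""
    def __init__(self, data: Dict[str, List[Any]]):
        super().__init__(data)
        self.cursor = 0

    def read(self, col_names, count):
        block = [self.data[name][self.cursor:self.cursor + count] for name in col_names]
        self.cursor += count
        return block

reader = DictReader({"id": [1, 2, 3], "score": [98, 89, 86]})
print(reader.read(["id", "score"], 2))  # [[1, 2], [98, 89]]
print(reader.read(["id", "score"], 2))  # [[3], [86]]
```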
class chdb.rwabc.PyWriter(col_names: List[str], types: List[type], data: Any) {#pywriter}
Bases: `ABC`
abstractmethod finalize {#finalize}
Assemble and return the final data from blocks. Must be implemented by subclasses.
`abstractmethod finalize() → bytes`
Returns
| Return Type | Description |
|-------------|---------------------------|
| `bytes` | The final serialized data |
abstractmethod write {#write}
Save columns of data to blocks. Must be implemented by subclasses.
`abstractmethod write(col_names: List[str], columns: List[List[Any]]) → None`
Parameters
| Parameter | Type | Description |
|-------------|-------------------|------------------------------------------------------------|
| `col_names` | `List[str]` | List of column names that are being written |
| `columns` | `List[List[Any]]` | List of columns data, each column is represented by a list |
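A writer subclass accumulates the column blocks passed to `write()` and assembles them into bytes in `finalize()`. Again, a local stand-in replaces `chdb.rwabc.PyWriter` so the sketch is self-contained, and the JSON serialization is just one illustrative way of producing the final bytes.

```python
import json
from abc import ABC, abstractmethod
from typing import Any, List

class PyWriter(ABC):  # stand-in mirroring chdb.rwabc.PyWriter's contract
    def __init__(self, col_names: List[str], types: List[type], data: Any):
        self.col_names, self.types, self.data = col_names, types, data
        self.blocks: List[List[List[Any]]] = []

    @abstractmethod
    def write(self, col_names: List[str], columns: List[List[Any]]) -> None: ...

    @abstractmethod
    def finalize(self) -> bytes: ...

class JsonWriter(PyWriter):
    def write(self, col_names, columns):
        self.blocks.append(columns)          # save one block of column data

    def finalize(self):
        merged = {name: [] for name in self.col_names}
        for block in self.blocks:            # concatenate blocks column-wise
            for name, column in zip(self.col_names, block):
                merged[name].extend(column)
        return json.dumps(merged).encode()

writer = JsonWriter(["id", "score"], [int, int], None)
writer.write(["id", "score"], [[1, 2], [98, 89]])
writer.write(["id", "score"], [[3], [86]])
print(writer.finalize())  # b'{"id": [1, 2, 3], "score": [98, 89, 86]}'
```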
Exception Handling {#exception-handling}
class chdb.ChdbError {#chdberror}
Bases: `Exception`
Base exception class for chDB-related errors.
This exception is raised when chDB query execution fails or encounters
an error. It inherits from the standard Python Exception class and
provides error information from the underlying ClickHouse engine.
The exception message typically contains detailed error information
from ClickHouse, including syntax errors, type mismatches, missing
tables/columns, and other query execution issues.
Variables
| Variable | Type | Description |
|----------|------|-----------------------------------------------------------------|
| `args` | - | Tuple containing the error message and any additional arguments |
Examples
```pycon
>>> try:
...     result = chdb.query("SELECT * FROM non_existent_table")
... except chdb.ChdbError as e:
...     print(f"Query failed: {e}")
Query failed: Table 'non_existent_table' doesn't exist
```
```pycon
>>> try:
...     result = chdb.query("SELECT invalid_syntax FROM")
... except chdb.ChdbError as e:
...     print(f"Syntax error: {e}")
Syntax error: Syntax error near 'FROM'
```
:::note
This exception is automatically raised by chdb.query() and related
functions when the underlying ClickHouse engine reports an error.
You should catch this exception when handling potentially failing
queries to provide appropriate error handling in your application.
:::
Version Information {#version-information}
chdb.chdb_version = ('3', '6', '0')
{#chdb-version}
The chDB version as a tuple of strings, e.g. ('3', '6', '0').
chdb.engine_version = '25.5.2.1'
{#engine-version}
The version of the embedded ClickHouse engine, as a string.
chdb.__version__ = '3.6.0'
{#version}
The chDB package version, as a string.
title: 'SQL Reference'
sidebar_label: 'SQL reference'
slug: /chdb/reference/sql-reference
description: 'SQL Reference for chDB'
keywords: ['chdb', 'sql reference']
doc_type: 'reference'
chdb supports the same SQL syntax, statements, engines and functions as ClickHouse:
| Topic |
|----------------------------|
| SQL Syntax |
| Statements |
| Table Engines |
| Database Engines |
| Regular Functions |
| Aggregate Functions |
| Table Functions |
| Window Functions |
For further information and examples, see the ClickHouse SQL Reference.
title: 'Data Formats'
sidebar_label: 'Data formats'
slug: /chdb/reference/data-formats
description: 'Data Formats for chDB'
keywords: ['chdb', 'data formats']
doc_type: 'reference'
When it comes to data formats, chDB is 100% feature compatible with ClickHouse.
Input formats are used to parse the data provided to `INSERT` and to `SELECT` from a file-backed table such as `File`, `URL` or `S3`.
Output formats are used to arrange the results of a `SELECT`, and to perform `INSERT`s into a file-backed table.
As well as the data formats that ClickHouse supports, chDB also supports:
- `ArrowTable` as an output format; the type is Python `pyarrow.Table`
- `DataFrame` as an input and output format; the type is Python `pandas.DataFrame`. For examples, see `test_joindf.py`
- `Debug` as an output format (an alias of `CSV`), but with debug verbose output from ClickHouse enabled
The supported data formats from ClickHouse are:
| Format | Input | Output |
|---------------------------------|-------|--------|
| TabSeparated | ✔ | ✔ |
| TabSeparatedRaw | ✔ | ✔ |
| TabSeparatedWithNames | ✔ | ✔ |
| TabSeparatedWithNamesAndTypes | ✔ | ✔ |
| TabSeparatedRawWithNames | ✔ | ✔ |
| TabSeparatedRawWithNamesAndTypes| ✔ | ✔ |
| Template | ✔ | ✔ |
| TemplateIgnoreSpaces | ✔ | ✗ |
| CSV | ✔ | ✔ |
| CSVWithNames | ✔ | ✔ |
| CSVWithNamesAndTypes | ✔ | ✔ |
| CustomSeparated | ✔ | ✔ |
| CustomSeparatedWithNames | ✔ | ✔ |
| CustomSeparatedWithNamesAndTypes| ✔ | ✔ |
| SQLInsert | ✗ | ✔ |
| Values | ✔ | ✔ |
| Vertical | ✗ | ✔ |
| JSON | ✔ | ✔ |
| JSONAsString | ✔ | ✗ |
| JSONAsObject | ✔ | ✗ |
| JSONStrings | ✔ | ✔ |
| JSONColumns | ✔ | ✔ |
| JSONColumnsWithMetadata | ✔ | ✔ |
| JSONCompact | ✔ | ✔ |
| JSONCompactStrings | ✗ | ✔ |
| JSONCompactColumns | ✔ | ✔ |
| JSONEachRow | ✔ | ✔ |
| PrettyJSONEachRow | ✗ | ✔ |
| JSONEachRowWithProgress | ✗ | ✔ |
| JSONStringsEachRow | ✔ | ✔ |
| JSONStringsEachRowWithProgress | ✗ | ✔ |
| JSONCompactEachRow | ✔ | ✔ |
| JSONCompactEachRowWithNames | ✔ | ✔ |
| JSONCompactEachRowWithNamesAndTypes | ✔ | ✔ |
| JSONCompactEachRowWithProgress | ✗ | ✔ |
| JSONCompactStringsEachRow | ✔ | ✔ |
| JSONCompactStringsEachRowWithNames | ✔ | ✔ |
| JSONCompactStringsEachRowWithNamesAndTypes | ✔ | ✔ |
| JSONCompactStringsEachRowWithProgress | ✗ | ✔ |
| JSONObjectEachRow | ✔ | ✔ |
| BSONEachRow | ✔ | ✔ |
| TSKV | ✔ | ✔ |
| Pretty | ✗ | ✔ |
| PrettyNoEscapes | ✗ | ✔ |
| PrettyMonoBlock | ✗ | ✔ |
| PrettyNoEscapesMonoBlock | ✗ | ✔ |
| PrettyCompact | ✗ | ✔ |
| PrettyCompactNoEscapes | ✗ | ✔ |
| PrettyCompactMonoBlock | ✗ | ✔ |
| PrettyCompactNoEscapesMonoBlock | ✗ | ✔ |
| PrettySpace | ✗ | ✔ |
| PrettySpaceNoEscapes | ✗ | ✔ |
| PrettySpaceMonoBlock | ✗ | ✔ |
| PrettySpaceNoEscapesMonoBlock | ✗ | ✔ |
| Prometheus | ✗ | ✔ |
| Protobuf | ✔ | ✔ |
| ProtobufSingle | ✔ | ✔ |
| ProtobufList | ✔ | ✔ |
| Avro | ✔ | ✔ |
| AvroConfluent | ✔ | ✗ |
| Parquet | ✔ | ✔ |
| ParquetMetadata | ✔ | ✗ |
| Arrow | ✔ | ✔ |
| ArrowStream | ✔ | ✔ |
| ORC | ✔ | ✔ |
| One | ✔ | ✗ |
| Npy | ✔ | ✔ |
| RowBinary | ✔ | ✔ |
| RowBinaryWithNames | ✔ | ✔ |
| RowBinaryWithNamesAndTypes | ✔ | ✔ |
| RowBinaryWithDefaults | ✔ | ✗ |
| Native | ✔ | ✔ |
| Null | ✗ | ✔ |
| XML | ✗ | ✔ |
| CapnProto | ✔ | ✔ |
| LineAsString | ✔ | ✔ |
| Regexp | ✔ | ✗ |
| RawBLOB | ✔ | ✔ |
| MsgPack | ✔ | ✔ |
| MySQLDump | ✔ | ✗ |
| DWARF | ✔ | ✗ |
| Markdown | ✗ | ✔ |
| Form | ✔ | ✗ |
For further information and examples, see ClickHouse formats for input and output data.
title: 'chDB Technical Reference'
slug: /chdb/reference
description: 'Data Formats for chDB'
keywords: ['chdb', 'data formats']
doc_type: 'reference'
| Reference page |
|----------------------|
| Data Formats |
| SQL Reference |
slug: /faq/general/ne-tormozit
title: 'What does “не тормозит” mean?'
toc_hidden: true
toc_priority: 11
description: 'This page explains what "Не тормозит" means'
keywords: ['Yandex']
doc_type: 'reference'
What does "Не тормозит" mean? {#what-does-ne-tormozit-mean}
We often get this question when people see vintage (limited production) ClickHouse t-shirts. They have the words "ClickHouse не тормозит" written in big bold text on the front.
Before ClickHouse became open-source, it was developed as an in-house storage system by a large European IT company, Yandex. That's why it initially got its slogan in Cyrillic, which is "не тормозит" (pronounced as "ne tormozit"). After the open-source release, we first produced some of those t-shirts for local events, and it was a no-brainer to use the slogan as-is.
A second batch of these t-shirts was supposed to be given away at international events, and we tried to make an English version of the slogan.
Unfortunately, we just couldn't come up with a punchy equivalent in English. The original phrase is elegant in its expression while being succinct, and restrictions on space on the t-shirt meant that we failed to come up with a good enough translation as most options appeared to be either too long or inaccurate.
We decided to keep the slogan even on t-shirts produced for international events. It appeared to be a great decision because people all over the world were positively surprised and curious when they saw it.
So, what does it mean? Here are some ways to translate "не тормозит":
- If you translate it literally, it sounds something like "ClickHouse does not press the brake pedal".
- Shorter, but less precise, translations might be "ClickHouse is not slow", "ClickHouse does not lag", or just "ClickHouse is fast".
If you haven't seen one of those t-shirts in person, you can check them out online in many ClickHouse-related videos. For example, this one:
P.S. These t-shirts are not for sale; they were given away for free at some ClickHouse Meetups, usually as a gift for best questions or other forms of active participation. Now, these t-shirts are no longer produced, and they have become highly valued collector's items.
slug: /faq/general/olap
title: 'What is OLAP?'
toc_hidden: true
toc_priority: 100
description: 'An explainer on what Online Analytical Processing is'
keywords: ['OLAP']
doc_type: 'reference'
What Is OLAP? {#what-is-olap}
OLAP stands for Online Analytical Processing. It is a broad term that can be looked at from two perspectives: technical and business. But at the very high level, you can just read these words backward:
- **Processing**: some source data is processed...
- **Analytical**: ...to produce some analytical reports and insights...
- **Online**: ...in real-time.
OLAP from the business perspective {#olap-from-the-business-perspective}
In recent years, business people started to realize the value of data. Companies who make their decisions blindly, more often than not fail to keep up with the competition. The data-driven approach of successful companies forces them to collect all data that might be remotely useful for making business decisions and need mechanisms to timely analyze them. Here's where OLAP database management systems (DBMS) come in.
In a business sense, OLAP allows companies to continuously plan, analyze, and report operational activities, thus maximizing efficiency, reducing expenses, and ultimately conquering the market share. It could be done either in an in-house system or outsourced to SaaS providers like web/mobile analytics services, CRM services, etc. OLAP is the technology behind many BI applications (Business Intelligence).
ClickHouse is an OLAP database management system that is pretty often used as a backend for those SaaS solutions for analyzing domain-specific data. However, some businesses are still reluctant to share their data with third-party providers and an in-house data warehouse scenario is also viable.
OLAP from the technical perspective {#olap-from-the-technical-perspective}
All database management systems can be classified into two groups: OLAP (Online *Analytical* Processing) and OLTP (Online *Transactional* Processing). The former focuses on building reports, each based on large volumes of historical data, but doing it not so frequently, while the latter usually handles a continuous stream of transactions, constantly modifying the current state of data.
In practice OLAP and OLTP are not strict categories but more of a spectrum. Most real systems usually focus on one of them but provide some solutions or workarounds if the opposite kind of workload is also desired. This situation often forces businesses to operate multiple storage systems in an integrated way, which might not be a big deal, but having more systems makes maintenance more expensive. So the trend of recent years is HTAP (*Hybrid Transactional/Analytical Processing*), where both kinds of workload are handled equally well by a single database management system.
Even if a DBMS started as a pure OLAP or pure OLTP system, it is forced to move in the HTAP direction to keep up with the competition. ClickHouse is no exception: it was initially designed as a fast-as-possible OLAP system, and it still does not have full-fledged transaction support, but features like consistent reads/writes and mutations for updating and deleting data had to be added.
The fundamental trade-off between OLAP and OLTP systems remains:
- To build analytical reports efficiently, it's crucial to be able to read columns separately; thus most OLAP databases are columnar.
- Storing columns separately increases the cost of operations on rows, like appends or in-place modifications, proportionally to the number of columns (which can be huge if the system tries to collect all details of an event just in case). Thus, most OLTP systems store data arranged by rows.
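The trade-off can be illustrated with plain Python lists; this is purely illustrative, not how any real database stores data.

```python
# Row-oriented: each record is stored together -- cheap to append, but an
# aggregate over one field still walks every record.
rows = [(1, "A", 100), (2, "B", 200), (3, "A", 150)]
total_rows = sum(amount for _, _, amount in rows)

# Column-oriented: each field is stored contiguously -- the same aggregate
# touches a single array, but appending a record touches every column.
cols = {"id": [1, 2, 3], "product": ["A", "B", "A"], "amount": [100, 200, 150]}
total_cols = sum(cols["amount"])

for name, value in zip(cols, (4, "C", 300)):
    cols[name].append(value)  # one write per column just to add one row

print(total_rows, total_cols)  # 450 450
```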
title: 'What does "ClickHouse" mean?'
toc_hidden: true
toc_priority: 10
slug: /faq/general/dbms-naming
description: 'Learn about What does "ClickHouse" mean?'
doc_type: 'reference'
keywords: ['ClickHouse name', 'clickstream', 'data warehouse', 'database naming', 'ClickHouse history']
What does "ClickHouse" mean? {#what-does-clickhouse-mean}
It's a combination of "**Click**stream" and "Data ware**House**". It comes from the original use case at Yandex.Metrica, where ClickHouse was supposed to keep records of all clicks by people from all over the Internet, and it still does the job. You can read more about this use case on the ClickHouse history page.
This two-part meaning has two consequences:
- The only correct way to write Click**H**ouse is with a capital H.
- If you need to abbreviate it, use **CH**. For some historical reasons, abbreviating as CK is also popular in China, mostly because one of the first talks about ClickHouse in Chinese used this form.
:::info
Many years after ClickHouse got its name, this approach of combining two words that are meaningful on their own has been highlighted as the best way to name a database in research by Andy Pavlo, an Associate Professor of Databases at Carnegie Mellon University. ClickHouse shared his "best database name of all time" award with Postgres.
:::
slug: /faq/general/who-is-using-clickhouse
title: 'Who is using ClickHouse?'
toc_hidden: true
toc_priority: 9
description: 'Describes who is using ClickHouse'
keywords: ['customer']
doc_type: 'reference'
Who is using ClickHouse? {#who-is-using-clickhouse}
Being an open-source product makes this question not so straightforward to answer. You do not have to tell anyone if you want to start using ClickHouse, you just go grab the source code or pre-compiled packages. There's no contract to sign, and the Apache 2.0 license allows for unconstrained software distribution.
Also, the technology stack is often in a grey zone of what's covered by an NDA. Some companies consider technologies they use as a competitive advantage even if they are open-source and do not allow employees to share any details publicly. Some see some PR risks and allow employees to share implementation details only with their PR department approval.
So how can you tell who is using ClickHouse?
One way is to ask around. If it's not in writing, people are much more willing to share what technologies are used in their companies, what the use cases are, what kind of hardware is used, data volumes, etc. We talk with users regularly at ClickHouse Meetups all over the world and have heard stories about 1000+ companies that use ClickHouse. Unfortunately, that's not reproducible, and we try to treat such stories as if they were told under NDA to avoid any potential troubles. But you can come to any of our future meetups and talk with other users on your own. There are multiple ways meetups are announced; for example, you can subscribe to our Twitter.
The second way is to look for companies publicly saying that they use ClickHouse. It's more substantial because there's usually some hard evidence, like a blog post, a talk video recording, or a slide deck. We collect links to such evidence on our Adopters page. Feel free to contribute the story of your employer, or just some links you've stumbled upon (but try not to violate your NDA in the process).
You can find the names of very large companies in the adopters list, like Bloomberg, Cisco, China Telecom, Tencent, or Lyft, but with the first approach we found that there are many more. For example, if you take the list of largest IT companies by Forbes (2020), over half of them are using ClickHouse in some way. Also, it would be unfair not to mention Yandex, the company which initially open-sourced ClickHouse in 2016 and happens to be one of the largest IT companies in Europe.
… (384-dimensional embedding vector, truncated) ] |
df5e9ab2-7b0b-434c-9d63-cc7d9836d5c4 | slug: /faq/general/
sidebar_position: 1
sidebar_label: 'General questions about ClickHouse'
keywords: ['faq', 'questions', 'what is']
title: 'General Questions About ClickHouse'
description: 'Index page listing general questions about ClickHouse'
doc_type: 'landing-page'
General questions about ClickHouse
What is ClickHouse?
Why is ClickHouse so fast?
Who is using ClickHouse?
What does "ClickHouse" mean?
What does "Не тормозит" mean?
What is OLAP?
What is a columnar database?
How do I choose a primary key?
Why not use something like MapReduce?
How do I contribute code to ClickHouse?
:::info Don't see what you're looking for?
Check out our
Knowledge Base
and also browse the many helpful articles found here in the documentation.
::: | {"source_file": "index.md"} | [
… (384-dimensional embedding vector, truncated) ] |
3523696a-5d16-44f3-b238-a802d28ba33c | slug: /faq/general/mapreduce
title: 'Why not use something like MapReduce?'
toc_hidden: true
toc_priority: 110
description: 'This page explains why you would use ClickHouse over MapReduce'
keywords: ['MapReduce']
doc_type: 'reference'
Why not use something like MapReduce? {#why-not-use-something-like-mapreduce}
We can refer to systems like MapReduce as distributed computing systems in which the reduce operation is based on distributed sorting. The most common open-source solution in this class is
Apache Hadoop
.
These systems aren't appropriate for online queries due to their high latency. In other words, they can't be used as the back-end for a web interface. These types of systems aren't useful for real-time data updates. Distributed sorting isn't the best way to perform reduce operations if the result of the operation and all the intermediate results (if there are any) are located in the RAM of a single server, which is usually the case for online queries. In such a case, a hash table is an optimal way to perform reduce operations. A common approach to optimizing map-reduce tasks is pre-aggregation (partial reduce) using a hash table in RAM. The user performs this optimization manually. Distributed sorting is one of the main causes of reduced performance when running simple map-reduce tasks.
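The hash-table pre-aggregation described above can be sketched in a few lines of Python (an illustrative comparison, not ClickHouse code): both reducers produce identical totals, but the sort-based one pays an extra O(n log n) sorting step that a RAM hash table avoids.

```python
from collections import defaultdict

def reduce_with_hash_table(pairs):
    """Aggregate (key, value) pairs in a RAM hash table -- one pass, no sort."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

def reduce_with_sorting(pairs):
    """Sort-based reduce, as MapReduce-style systems do: sort by key,
    then sum each contiguous run of equal keys."""
    result = {}
    for key, value in sorted(pairs):  # the O(n log n) sort is the extra cost
        result[key] = result.get(key, 0) + value
    return result

pairs = [("a", 1), ("b", 2), ("a", 3), ("c", 4), ("b", 5)]
assert reduce_with_hash_table(pairs) == reduce_with_sorting(pairs) == {"a": 4, "b": 7, "c": 4}
```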
Most MapReduce implementations allow you to execute arbitrary code on a cluster, but a declarative query language is better suited to OLAP for running experiments quickly. For example, Hadoop has Hive and Pig. Also consider Cloudera Impala or Shark (outdated) for Spark, as well as Spark SQL, Presto, and Apache Drill. Performance when running such tasks is highly sub-optimal compared to specialized systems, and their relatively high latency makes it unrealistic to use them as the backend for a web interface.
… (384-dimensional embedding vector, truncated) ] |
430f1120-dfda-4153-9a62-f1dbd518876a | slug: /faq/general/columnar-database
title: 'What is a columnar database?'
toc_hidden: true
toc_priority: 101
description: 'This page describes what a columnar database is'
keywords: ['columnar database', 'column-oriented database', 'OLAP database', 'analytical database', 'data warehousing']
doc_type: 'reference'
import Image from '@theme/IdealImage';
import RowOriented from '@site/static/images/row-oriented.gif';
import ColumnOriented from '@site/static/images/column-oriented.gif';
What is a columnar database? {#what-is-a-columnar-database}
A columnar database stores the data of each column independently. This allows reading data from disk only for those columns that are used in any given query. The cost is that operations that affect whole rows become proportionally more expensive. The synonym for a columnar database is a column-oriented database management system. ClickHouse is a typical example of such a system.
Key columnar database advantages are:
Queries that use only a few columns out of many.
Aggregating queries against large volumes of data.
Column-wise data compression.
Here is the illustration of the difference between traditional row-oriented systems and columnar databases when building reports:
Traditional row-oriented
Columnar
A columnar database is the preferred choice for analytical applications because it allows having many columns in a table just in case, without paying the cost for unused columns at read query execution time (a traditional OLTP database reads all of the data during queries, as the data is stored in rows, not columns). Column-oriented databases are designed for big data processing and data warehousing; they often natively scale out across distributed clusters of low-cost hardware to increase throughput. ClickHouse does this with a combination of
distributed
and
replicated
tables.
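The trade-off above can be sketched with plain Python data structures (a toy illustration, not ClickHouse internals): in the column layout, a query touching only `amount` never reads the other columns, whereas the row layout stores every field of a row together.

```python
# Row-oriented layout: each row is stored together; a scan touches every field.
rows = [
    {"id": 1, "name": "Alice", "city": "Oslo",   "amount": 100},
    {"id": 2, "name": "Bob",   "city": "Lisbon", "amount": 200},
    {"id": 3, "name": "Carol", "city": "Oslo",   "amount": 150},
]

# Column-oriented layout: each column is stored independently; a query that
# uses only "amount" never reads "name" or "city".
columns = {
    "id":     [1, 2, 3],
    "name":   ["Alice", "Bob", "Carol"],
    "city":   ["Oslo", "Lisbon", "Oslo"],
    "amount": [100, 200, 150],
}

# SELECT sum(amount): the row store scans whole rows, the column store one list.
total_row_store = sum(r["amount"] for r in rows)
total_col_store = sum(columns["amount"])
assert total_row_store == total_col_store == 450
```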
If you'd like a deep dive into the history of column databases, how they differ from row-oriented databases, and the use cases for a column database, see
the column databases guide
. | {"source_file": "columnar-database.md"} | [
… (384-dimensional embedding vector, truncated) ] |
b5ab9d13-57fd-4335-be18-c767788ffb15 | slug: /faq/integration/
sidebar_position: 1
sidebar_label: 'Integrating ClickHouse with other systems'
keywords: ['clickhouse', 'faq', 'questions', 'integrations']
title: 'Questions about integrating ClickHouse and other systems'
description: 'Landing page listing questions related to integrating ClickHouse with other systems'
doc_type: 'landing-page'
Questions about integrating ClickHouse and other systems
How do I export data from ClickHouse to a file?
How to import JSON into ClickHouse?
How do I connect Kafka to ClickHouse?
Can I connect my Java application to ClickHouse?
Can ClickHouse read tables from MySQL?
Can ClickHouse read tables from PostgreSQL
What if I have a problem with encodings when connecting to Oracle via ODBC?
:::info Don't see what you're looking for?
Check out our
Knowledge Base
and also browse the many helpful articles found here in the documentation.
::: | {"source_file": "index.md"} | [
… (384-dimensional embedding vector, truncated) ] |
7ff91d1f-a4cc-40ee-816e-2a3d6d3f3899 | slug: /faq/integration/oracle-odbc
title: 'What if I have a problem with encodings when using Oracle via ODBC?'
toc_hidden: true
toc_priority: 20
description: 'This page provides guidance on what to do if you have a problem with encodings when using Oracle via ODBC'
doc_type: 'guide'
keywords: ['oracle', 'odbc', 'encoding', 'integration', 'external dictionary']
What if I have a problem with encodings when using Oracle via ODBC? {#oracle-odbc-encodings}
If you use Oracle as a source of ClickHouse external dictionaries via Oracle ODBC driver, you need to set the correct value for the
NLS_LANG
environment variable in
/etc/default/clickhouse
. For more information, see the
Oracle NLS_LANG FAQ
.
Example
bash
NLS_LANG=RUSSIAN_RUSSIA.UTF8 | {"source_file": "oracle-odbc.md"} | [
… (384-dimensional embedding vector, truncated) ] |
b7cad2d1-9116-416d-a892-87b7c55cad43 | slug: /faq/integration/json-import
title: 'How to import JSON into ClickHouse?'
toc_hidden: true
toc_priority: 11
description: 'This page shows you how to import JSON into ClickHouse'
keywords: ['JSON import', 'JSONEachRow format', 'data import', 'JSON ingestion', 'data formats']
doc_type: 'guide'
How to Import JSON Into ClickHouse? {#how-to-import-json-into-clickhouse}
ClickHouse supports a wide range of
data formats for input and output
. There are multiple JSON variations among them, but the most commonly used for data ingestion is
JSONEachRow
. It expects one JSON object per row, each object separated by a newline.
Examples {#examples}
Using
HTTP interface
:
bash
$ echo '{"foo":"bar"}' | curl 'http://localhost:8123/?query=INSERT%20INTO%20test%20FORMAT%20JSONEachRow' --data-binary @-
Using
CLI interface
:
bash
$ echo '{"foo":"bar"}' | clickhouse-client --query="INSERT INTO test FORMAT JSONEachRow"
Instead of inserting data manually, you might consider using an
integration tool
.
Useful settings {#useful-settings}
input_format_skip_unknown_fields
allows inserting JSON even if there are additional fields not present in the table schema (by discarding them).
input_format_import_nested_json
allows inserting nested JSON objects into columns of
Nested
type.
:::note
Settings are specified as
GET
parameters for the HTTP interface or as additional command-line arguments prefixed with
--
for the
CLI
interface.
::: | {"source_file": "json-import.md"} | [
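As a rough sketch of the two points in the note above (one JSON object per line, settings passed as GET parameters on the HTTP interface), here is how such an insert could be assembled with only the Python standard library. The `test` table matches the curl example earlier; nothing is actually sent unless you uncomment the final line against a running server.

```python
import json
from urllib import parse, request

def build_json_each_row_insert(rows, table="test", host="http://localhost:8123"):
    """Build an INSERT ... FORMAT JSONEachRow request for the HTTP interface.

    The payload is one JSON object per line; settings such as
    input_format_skip_unknown_fields travel as GET parameters.
    """
    payload = "\n".join(json.dumps(row) for row in rows).encode("utf-8")
    params = parse.urlencode({
        "query": f"INSERT INTO {table} FORMAT JSONEachRow",
        "input_format_skip_unknown_fields": 1,  # discard fields missing from the schema
    })
    return request.Request(f"{host}/?{params}", data=payload, method="POST")

req = build_json_each_row_insert([{"foo": "bar"}, {"foo": "baz", "extra": 1}])
# request.urlopen(req)  # requires a running ClickHouse server
```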
… (384-dimensional embedding vector, truncated) ] |
4ce28e19-d44a-4b70-ab20-41f7c52ea6cf | slug: /faq/operations/production
title: 'Which ClickHouse version to use in production?'
toc_hidden: true
toc_priority: 10
description: 'This page provides guidance on which ClickHouse version to use in production'
doc_type: 'guide'
keywords: ['production', 'deployment', 'versions', 'best practices', 'upgrade strategy']
Which ClickHouse version to use in production? {#which-clickhouse-version-to-use-in-production}
First of all, let's discuss why people ask this question in the first place. There are two key reasons:
ClickHouse is developed with pretty high velocity, and there are usually 10+ stable releases per year. That makes for a wide range of releases to choose from, which is not a trivial choice.
Some users want to avoid spending time figuring out which version works best for their use case and just follow someone else's advice.
The second reason is more fundamental, so we'll start with that one and then get back to navigating through various ClickHouse releases.
Which ClickHouse version do you recommend? {#which-clickhouse-version-do-you-recommend}
It's tempting to hire consultants or trust some known experts to get rid of responsibility for your production environment. You install some specific ClickHouse version that someone else recommended; if there's some issue with it - it's not your fault, it's someone else's. This line of reasoning is a big trap. No external person knows better than you what's going on in your company's production environment.
So how do you properly choose which ClickHouse version to upgrade to? Or how do you choose your first ClickHouse version? First of all, you need to invest in setting up a
realistic pre-production environment
. In an ideal world, it could be a completely identical shadow copy, but that's usually expensive.
Here are some key points to get reasonable fidelity in a pre-production environment with not-so-high costs:
The pre-production environment needs to run a set of queries as close as possible to what you intend to run in production:
Don't make it read-only with some frozen data.
Don't make it write-only with just copying data without building some typical reports.
Don't wipe it clean instead of applying schema migrations.
Use a sample of real production data and queries. Try to choose a sample that's still representative and makes
SELECT
queries return reasonable results. Use obfuscation if your data is sensitive and internal policies do not allow it to leave the production environment.
Make sure that pre-production is covered by your monitoring and alerting software the same way your production environment is.
If your production spans across multiple datacenters or regions, make your pre-production do the same.
If your production uses complex features like replication, distributed tables and cascading materialized views, make sure they are configured similarly in pre-production. | {"source_file": "production.md"} | [
… (384-dimensional embedding vector, truncated) ] |