id stringlengths 36 36 | document stringlengths 3 3k | metadata stringlengths 23 69 | embeddings listlengths 384 384 |
|---|---|---|---|
50bd0c49-a045-47f7-b9cc-754c3257e883 | slug: '/examples/aggregate-function-combinators/groupArrayResample'
title: 'groupArrayResample'
description: 'Example of using the Resample combinator with groupArray'
keywords: ['groupArray', 'Resample', 'combinator', 'examples', 'groupArrayResample']
sidebar_label: 'groupArrayResample'
doc_type: 'reference'
# groupArrayResample {#grouparrayresample}

## Description {#description}

The `Resample` combinator can be applied to the `groupArray` aggregate function to divide the range of a specified key column into a fixed number of intervals (`N`) and construct the resulting array by selecting one representative value (corresponding to the minimum key) from the data points falling into each interval. It creates a downsampled view of the data rather than collecting all values.
## Example usage {#example-usage}

Let's look at an example. We'll create a table which contains the `name`, `age` and `wage` of employees, and we'll insert some data into it:
```sql
CREATE TABLE employee_data
(
name String,
age UInt8,
wage Float32
) ENGINE = MergeTree()
ORDER BY tuple();

INSERT INTO employee_data (name, age, wage) VALUES
('John', 16, 10.0),
('Alice', 30, 15.0),
('Mary', 35, 8.0),
('Evelyn', 48, 11.5),
('David', 62, 9.9),
('Brian', 60, 16.0);
```
Let's get the names of the people whose age lies in the intervals of `[30,60)` and `[60,75)`. Since we use an integer representation for age, we get ages in the `[30, 59]` and `[60, 74]` intervals.

To aggregate names in an array, we use the `groupArray` aggregate function. It takes one argument. In our case, it's the `name` column. The `groupArrayResample` function should use the `age` column to aggregate names by age. To define the required intervals, we pass `30`, `75`, `30` as arguments to the `groupArrayResample` function:

```sql
SELECT groupArrayResample(30, 75, 30)(name, age) FROM employee_data;
```

```response
┌─groupArrayResample(30, 75, 30)(name, age)─────┐
│ [['Alice','Mary','Evelyn'],['David','Brian']] │
└───────────────────────────────────────────────┘
```
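The interval bucketing that `groupArrayResample(start, end, step)` performs can be sketched in plain Python. This is an illustration of the documented semantics only, not ClickHouse's implementation; the function and variable names are invented for the example:

```python
# Sketch of groupArrayResample(start, end, step)(value, key):
# group `value` entries into fixed-width key intervals, keeping insertion order.
def group_array_resample(start, end, step, rows):
    n_buckets = (end - start + step - 1) // step  # ceil division: last interval may be partial
    buckets = [[] for _ in range(n_buckets)]
    for value, key in rows:
        if start <= key < end:                    # keys outside [start, end) are ignored
            buckets[(key - start) // step].append(value)
    return buckets

employees = [('John', 16), ('Alice', 30), ('Mary', 35),
             ('Evelyn', 48), ('David', 62), ('Brian', 60)]
print(group_array_resample(30, 75, 30, employees))
# [['Alice', 'Mary', 'Evelyn'], ['David', 'Brian']]
```

John (age 16) falls outside `[30, 75)` and is dropped, matching the query result above.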
## See also {#see-also}

- `groupArray`
- `Resample` combinator | {"source_file": "groupArrayResample.md"} | [
-0.0334571935236454,
0.04147496074438095,
-0.01857909746468067,
0.009065435267984867,
-0.08038917183876038,
0.01637049950659275,
0.03394204005599022,
0.026446767151355743,
-0.042450107634067535,
-0.032860420644283295,
-0.03660868853330612,
-0.004881930537521839,
0.08130308240652084,
-0.048... |
534580a0-1039-458c-90ca-e551ec53fad8 | slug: '/examples/aggregate-function-combinators/avgIf'
title: 'avgIf'
description: 'Example of using the avgIf combinator'
keywords: ['avg', 'if', 'combinator', 'examples', 'avgIf']
sidebar_label: 'avgIf'
doc_type: 'reference'
# avgIf {#avgif}

## Description {#description}

The `If` combinator can be applied to the `avg` function to calculate the arithmetic mean of values for rows where the condition is true, using the `avgIf` aggregate combinator function.

## Example usage {#example-usage}

In this example, we'll create a table that stores sales data with success flags, and we'll use `avgIf` to calculate the average sale amount for successful transactions.
```sql title="Query"
CREATE TABLE sales(
transaction_id UInt32,
amount Decimal(10,2),
is_successful UInt8
) ENGINE = Log;
INSERT INTO sales VALUES
(1, 100.50, 1),
(2, 200.75, 1),
(3, 150.25, 0),
(4, 300.00, 1),
(5, 250.50, 0),
(6, 175.25, 1);
SELECT
avgIf(amount, is_successful = 1) AS avg_successful_sale
FROM sales;
```
The `avgIf` function will calculate the average amount only for rows where `is_successful = 1`. In this case, it will average the amounts 100.50, 200.75, 300.00, and 175.25.

```response title="Response"
   ┌─avg_successful_sale─┐
1. │              193.88 │
   └─────────────────────┘
```
## See also {#see-also}

- `avg`
- `If` combinator | {"source_file": "avgIf.md"} | [
-0.030103694647550583,
0.015890251845121384,
-0.03387553244829178,
0.016217177733778954,
-0.12766960263252258,
0.00312243914231658,
0.12300441414117813,
0.08191534876823425,
0.0541369691491127,
0.0342235341668129,
-0.0055503943003714085,
-0.049543190747499466,
0.04380548745393753,
-0.03589... |
a6985191-a8c3-463e-b446-690be20e2c8c | slug: '/examples/aggregate-function-combinators/maxSimpleState'
title: 'maxSimpleState'
description: 'Example of using the maxSimpleState combinator'
keywords: ['max', 'state', 'simple', 'combinator', 'examples', 'maxSimpleState']
sidebar_label: 'maxSimpleState'
doc_type: 'reference'
# maxSimpleState {#maxsimplestate}

## Description {#description}

The `SimpleState` combinator can be applied to the `max` function to return the maximum value across all input values. It returns the result with type `SimpleAggregateFunction`.

## Example usage {#example-usage}

The example given in `minSimpleState` demonstrates a usage of both `maxSimpleState` and `minSimpleState`.
## See also {#see-also}

- `max`
- `SimpleState` combinator
- `SimpleAggregateFunction` type | {"source_file": "maxSimpleState.md"} | [
-0.011268286034464836,
-0.0032782775815576315,
0.044117555022239685,
0.005002915393561125,
-0.08222074061632156,
0.03653719276189804,
-0.02008495293557644,
0.12731194496154785,
-0.05682089552283287,
-0.02614896558225155,
-0.035098496824502945,
-0.004363690037280321,
0.07261857390403748,
-0... |
ba5455c6-bccb-4427-b6b1-3ab3aa6d4660 | slug: '/examples/aggregate-function-combinators/argMinIf'
title: 'argMinIf'
description: 'Example of using the argMinIf combinator'
keywords: ['argMin', 'if', 'combinator', 'examples', 'argMinIf']
sidebar_label: 'argMinIf'
doc_type: 'reference'
# argMinIf {#argminif}

## Description {#description}

The `If` combinator can be applied to the `argMin` function to find the value of `arg` that corresponds to the minimum value of `val` for rows where the condition is true, using the `argMinIf` aggregate combinator function.

The `argMinIf` function is useful when you need to find the value associated with the minimum value in a dataset, but only for rows that satisfy a specific condition.

## Example usage {#example-usage}

In this example, we'll create a table that stores product prices and their timestamps, and we'll use `argMinIf` to find, for each product, the price at the earliest timestamp at which it was in stock.
```sql title="Query"
CREATE TABLE product_prices(
product_id UInt32,
price Decimal(10,2),
timestamp DateTime,
in_stock UInt8
) ENGINE = Log;
INSERT INTO product_prices VALUES
(1, 10.99, '2024-01-01 10:00:00', 1),
(1, 9.99, '2024-01-01 10:05:00', 1),
(1, 11.99, '2024-01-01 10:10:00', 0),
(2, 20.99, '2024-01-01 11:00:00', 1),
(2, 19.99, '2024-01-01 11:05:00', 1),
(2, 21.99, '2024-01-01 11:10:00', 1);
SELECT
product_id,
argMinIf(price, timestamp, in_stock = 1) AS lowest_price_when_in_stock
FROM product_prices
GROUP BY product_id;
```
The `argMinIf` function will find the price that corresponds to the earliest timestamp for each product, but only considering rows where `in_stock = 1`. For example:

- Product 1: Among in-stock rows, 10.99 has the earliest timestamp (10:00:00)
- Product 2: Among in-stock rows, 20.99 has the earliest timestamp (11:00:00)

```response title="Response"
   ┌─product_id─┬─lowest_price_when_in_stock─┐
1. │          1 │                      10.99 │
2. │          2 │                      20.99 │
   └────────────┴────────────────────────────┘
```
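The `argMinIf(arg, val, cond)` semantics can be mirrored in a short Python sketch — an illustration under the assumption of in-memory rows, not ClickHouse's implementation:

```python
# Sketch of argMinIf(arg, val, cond): among rows where cond holds,
# return the `arg` from the row with the minimum `val`.
def arg_min_if(rows):
    eligible = [(val, arg) for arg, val, cond in rows if cond]
    return min(eligible)[1] if eligible else None

# Product 1's rows as (price, timestamp, in_stock) from the example above.
product_1 = [(10.99, '10:00:00', True),
             (9.99,  '10:05:00', True),
             (11.99, '10:10:00', False)]
print(arg_min_if(product_1))  # 10.99 — price at the earliest in-stock timestamp
```

Note the result is 10.99, not the numerically lowest price 9.99: the minimum is taken over the timestamp (`val`), and the price (`arg`) of that row is returned.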
## See also {#see-also}

- `argMin`
- `argMax`
- `argMaxIf`
- `If` combinator | {"source_file": "argMinIf.md"} | [
-0.056487321853637695,
-0.001465103356167674,
-0.03509093448519707,
0.05072447657585144,
-0.0520755909383297,
0.0234078299254179,
0.16892853379249573,
0.07911623269319534,
-0.017845233902335167,
0.000824799994006753,
0.018848802894353867,
-0.04753480479121208,
0.09146735072135925,
-0.03601... |
0c1bcda7-2b7e-4cc1-a978-0ce5308f8099 | slug: '/examples/aggregate-function-combinators/minMap'
title: 'minMap'
description: 'Example of using the minMap combinator'
keywords: ['min', 'map', 'combinator', 'examples', 'minMap']
sidebar_label: 'minMap'
doc_type: 'reference'
# minMap {#minmap}

## Description {#description}

The `Map` combinator can be applied to the `min` function to calculate the minimum value in a Map according to each key, using the `minMap` aggregate combinator function.

## Example usage {#example-usage}

In this example, we'll create a table that stores status codes and their counts for different timeslots, where each row contains a Map of status codes to their corresponding counts. We'll use `minMap` to find the minimum count for each status code within each timeslot.
```sql title="Query"
CREATE TABLE metrics(
date Date,
timeslot DateTime,
status Map(String, UInt64)
) ENGINE = Log;
INSERT INTO metrics VALUES
('2000-01-01', '2000-01-01 00:00:00', (['a', 'b', 'c'], [15, 25, 35])),
('2000-01-01', '2000-01-01 00:00:00', (['c', 'd', 'e'], [45, 55, 65])),
('2000-01-01', '2000-01-01 00:01:00', (['d', 'e', 'f'], [75, 85, 95])),
('2000-01-01', '2000-01-01 00:01:00', (['f', 'g', 'g'], [105, 115, 125]));
SELECT
    timeslot,
    minMap(status)
FROM metrics
GROUP BY timeslot;
```
The `minMap` function will find the minimum count for each status code within each timeslot. For example:
- In timeslot '2000-01-01 00:00:00':
  - Status 'a': 15
  - Status 'b': 25
  - Status 'c': min(35, 45) = 35
  - Status 'd': 55
  - Status 'e': 65
- In timeslot '2000-01-01 00:01:00':
  - Status 'd': 75
  - Status 'e': 85
  - Status 'f': min(95, 105) = 95
  - Status 'g': min(115, 125) = 115
```response title="Response"
   ┌────────────timeslot─┬─minMap(status)───────────────────────┐
1. │ 2000-01-01 00:01:00 │ {'d':75,'e':85,'f':95,'g':115}       │
2. │ 2000-01-01 00:00:00 │ {'a':15,'b':25,'c':35,'d':55,'e':65} │
   └─────────────────────┴──────────────────────────────────────┘
```
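The per-key merge that `minMap` performs within a group can be sketched in Python. This only illustrates the semantics on the '00:00:00' timeslot's data; the helper name is invented:

```python
# Sketch of minMap within one group: merge maps, keeping the minimum per key.
def min_map(rows):
    result = {}
    for pairs in rows:                 # each row contributes (key, value) pairs
        for k, v in pairs:
            result[k] = min(v, result.get(k, v))
    return dict(sorted(result.items()))

slot_00 = [[('a', 15), ('b', 25), ('c', 35)],
           [('c', 45), ('d', 55), ('e', 65)]]
print(min_map(slot_00))  # {'a': 15, 'b': 25, 'c': 35, 'd': 55, 'e': 65}
```

Only key 'c' appears in both rows, so only it is actually reduced: min(35, 45) = 35.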
## See also {#see-also}

- `min`
- `Map` combinator | {"source_file": "minMap.md"} | [
0.020174294710159302,
0.042638204991817474,
0.05391319468617439,
0.026234935969114304,
-0.10670614242553711,
0.042676474899053574,
0.04383949935436249,
0.10521887987852097,
-0.00019162491662427783,
0.0318739078938961,
0.04663796350359917,
-0.08972744643688202,
0.09506341814994812,
-0.02497... |
bb6cc0bf-d604-4f5f-8d7e-c3843a4e79ef | slug: '/examples/aggregate-function-combinators/countResample'
title: 'countResample'
description: 'Example of using the Resample combinator with count'
keywords: ['count', 'Resample', 'combinator', 'examples', 'countResample']
sidebar_label: 'countResample'
doc_type: 'reference'
# countResample {#countResample}

## Description {#description}

The `Resample` combinator can be applied to the `count` aggregate function to count values of a specified key column in a fixed number of intervals (`N`).

## Example usage {#example-usage}

### Basic example {#basic-example}

Let's look at an example. We'll create a table which contains the `name`, `age` and `wage` of employees, and we'll insert some data into it:
```sql
CREATE TABLE employee_data
(
name String,
age UInt8,
wage Float32
)
ENGINE = MergeTree()
ORDER BY tuple();

INSERT INTO employee_data (name, age, wage) VALUES
('John', 16, 10.0),
('Alice', 30, 15.0),
('Mary', 35, 8.0),
('Evelyn', 48, 11.5),
('David', 62, 9.9),
('Brian', 60, 16.0);
```
Let's count all the people whose age lies in the intervals of `[30,60)` and `[60,75)`. Since we use an integer representation for age, we get ages in the `[30, 59]` and `[60, 74]` intervals. To do so, we apply the `Resample` combinator to `count`:

```sql
SELECT countResample(30, 75, 30)(name, age) AS amount FROM employee_data;
```

```response
┌─amount─┐
│ [3,2]  │
└────────┘
```
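The counting version of the same interval logic can be sketched in Python — an illustration of the documented semantics, with invented names, not ClickHouse code:

```python
# Sketch of countResample(start, end, step)(key):
# count rows whose key falls into each fixed-width interval of [start, end).
def count_resample(start, end, step, keys):
    counts = [0] * ((end - start + step - 1) // step)  # ceil: last interval may be partial
    for key in keys:
        if start <= key < end:
            counts[(key - start) // step] += 1
    return counts

ages = [16, 30, 35, 48, 62, 60]
print(count_resample(30, 75, 30, ages))  # [3, 2]
```

Three ages fall in `[30, 60)` and two in `[60, 75)`; age 16 is outside the range and is not counted.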
## See also {#see-also}

- `count`
- `Resample` combinator | {"source_file": "countResample.md"} | [
-0.03934033215045929,
-0.008353724144399166,
0.03559162840247154,
0.034515731036663055,
-0.10935699194669724,
0.014102143235504627,
0.006318810861557722,
0.08667366951704025,
-0.021645573899149895,
-0.007501059677451849,
0.0010822853073477745,
-0.03273065388202667,
0.08157810568809509,
-0.... |
812bafb3-6716-4864-8e5d-db23b6ad69e6 | slug: '/examples/aggregate-function-combinators/argMaxIf'
title: 'argMaxIf'
description: 'Example of using the argMaxIf combinator'
keywords: ['argMax', 'if', 'combinator', 'examples', 'argMaxIf']
sidebar_label: 'argMaxIf'
doc_type: 'reference'
# argMaxIf {#argmaxif}

## Description {#description}

The `If` combinator can be applied to the `argMax` function to find the value of `arg` that corresponds to the maximum value of `val` for rows where the condition is true, using the `argMaxIf` aggregate combinator function.

The `argMaxIf` function is useful when you need to find the value associated with the maximum value in a dataset, but only for rows that satisfy a specific condition.

## Example usage {#example-usage}

In this example, we'll use a sample dataset of product sales to demonstrate how `argMaxIf` works. We'll find the product name that has the highest price, but only for products that have been sold at least 10 times.
```sql title="Query"
CREATE TABLE product_sales
(
product_name String,
price Decimal32(2),
sales_count UInt32
) ENGINE = Memory;
INSERT INTO product_sales VALUES
('Laptop', 999.99, 10),
('Phone', 499.99, 15),
('Tablet', 299.99, 0),
('Watch', 1199.99, 5),
('Headphones', 79.99, 20);
SELECT argMaxIf(product_name, price, sales_count >= 10) AS most_expensive_popular_product
FROM product_sales;
```
The `argMaxIf` function will return the product name that has the highest price among all products that have been sold at least 10 times (`sales_count >= 10`). In this case, it will return 'Laptop' since it has the highest price (999.99) among the popular products.

```response title="Response"
   ┌─most_expensi⋯lar_product─┐
1. │ Laptop                   │
   └──────────────────────────┘
```
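The selection `argMaxIf` performs can be mirrored in a short Python sketch over the sample rows — illustrative only, with invented helper names:

```python
# Sketch of argMaxIf(arg, val, cond): among rows where cond holds,
# return the `arg` from the row with the maximum `val`.
def arg_max_if(rows):
    eligible = [(val, arg) for arg, val, cond in rows if cond]
    return max(eligible)[1] if eligible else None

sales = [('Laptop', 999.99, 10), ('Phone', 499.99, 15), ('Tablet', 299.99, 0),
         ('Watch', 1199.99, 5), ('Headphones', 79.99, 20)]
rows = [(name, price, count >= 10) for name, price, count in sales]
print(arg_max_if(rows))  # Laptop
```

'Watch' has the highest price overall (1199.99) but only 5 sales, so it fails the condition; among the qualifying rows, 'Laptop' wins.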
## See also {#see-also}

- `argMax`
- `argMin`
- `argMinIf`
- `If` combinator | {"source_file": "argMaxIf.md"} | [
-0.05963308736681938,
-0.01744123548269272,
-0.02043168433010578,
-0.02475057542324066,
-0.04161219671368599,
0.036799993366003036,
0.1103224903345108,
0.0854041576385498,
-0.040516480803489685,
-0.016757601872086525,
0.007647468242794275,
0.007460466120392084,
0.102023184299469,
-0.011643... |
a3c73093-47d7-453f-8f83-ea6e27737080 | slug: '/examples/aggregate-function-combinators/avgMergeState'
title: 'avgMergeState'
description: 'Example of using the avgMergeState combinator'
keywords: ['avg', 'MergeState', 'combinator', 'examples', 'avgMergeState']
sidebar_label: 'avgMergeState'
doc_type: 'reference'
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
# avgMergeState {#avgMergeState}

## Description {#description}

The `MergeState` combinator can be applied to the `avg` function to merge partial aggregate states of type `AggregateFunction(avg, T)` and return a new intermediate aggregation state.

## Example usage {#example-usage}

The `MergeState` combinator is particularly useful for multi-level aggregation scenarios where you want to combine pre-aggregated states and maintain them as states (rather than finalizing them) for further processing. To illustrate, we'll look at an example in which we transform individual server performance metrics into hierarchical aggregations across multiple levels: server level → region level → datacenter level.

First we create a table to store the raw data:
```sql
CREATE TABLE raw_server_metrics
(
    timestamp DateTime DEFAULT now(),
    server_id UInt32,
    region String,
    datacenter String,
    response_time_ms UInt32
)
ENGINE = MergeTree()
ORDER BY (region, server_id, timestamp);
```
We'll create a server-level aggregation target table and define an Incremental
materialized view acting as an insert trigger to it:
```sql
CREATE TABLE server_performance
(
server_id UInt32,
region String,
datacenter String,
avg_response_time AggregateFunction(avg, UInt32)
)
ENGINE = AggregatingMergeTree()
ORDER BY (region, server_id);
CREATE MATERIALIZED VIEW server_performance_mv
TO server_performance
AS SELECT
server_id,
region,
datacenter,
avgState(response_time_ms) AS avg_response_time
FROM raw_server_metrics
GROUP BY server_id, region, datacenter;
```
We'll do the same for the regional and datacenter levels:
```sql
CREATE TABLE region_performance
(
region String,
datacenter String,
avg_response_time AggregateFunction(avg, UInt32)
)
ENGINE = AggregatingMergeTree()
ORDER BY (datacenter, region);
CREATE MATERIALIZED VIEW region_performance_mv
TO region_performance
AS SELECT
region,
datacenter,
avgMergeState(avg_response_time) AS avg_response_time
FROM server_performance
GROUP BY region, datacenter;
-- datacenter level table and materialized view
CREATE TABLE datacenter_performance
(
datacenter String,
avg_response_time AggregateFunction(avg, UInt32)
)
ENGINE = AggregatingMergeTree()
ORDER BY datacenter;
CREATE MATERIALIZED VIEW datacenter_performance_mv
TO datacenter_performance
AS SELECT
datacenter,
avgMergeState(avg_response_time) AS avg_response_time
FROM region_performance
GROUP BY datacenter;
```
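The state-passing idea behind this hierarchy can be sketched in Python. This is illustrative only: ClickHouse serializes `avg` states in a binary format, but conceptually an `avg` state carries a (sum, count) pair, and `MergeState` combines states without finalizing them:

```python
# Sketch: an avg "state" is a (sum, count) pair.
def avg_state(values):                 # like avgState over raw values
    return (sum(values), len(values))

def avg_merge_state(states):           # like avgMergeState: merge, stay a state
    return (sum(s for s, _ in states), sum(c for _, c in states))

def avg_merge(state):                  # like avgMerge: finalize to a number
    s, c = state
    return s / c

# Two us-east servers from the sample data: (120, 130) and (115,).
server_states = [avg_state([120, 130]), avg_state([115])]
region_state = avg_merge_state(server_states)   # still mergeable further up
print(region_state)                             # (365, 3)
print(avg_merge(region_state))                  # 121.66666666666667
```

Keeping (365, 3) instead of 121.67 is the point of `MergeState`: the region-level state can be merged again at the datacenter level without losing the correct weighting.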
We'll then insert sample raw data into the source table: | {"source_file": "avgMergeState.md"} | [
-0.05490066483616829,
-0.02016613259911537,
0.03653664514422417,
0.06451437622308731,
-0.037277646362781525,
0.028151659294962883,
0.08036521077156067,
0.057375211268663406,
-0.007291099056601524,
0.005956313572824001,
-0.06824959069490433,
-0.05912485718727112,
0.01399574801325798,
0.0124... |
54c50d0c-9b40-4e32-9921-e90bb8fef328 | We'll then insert sample raw data into the source table:
```sql
INSERT INTO raw_server_metrics (timestamp, server_id, region, datacenter, response_time_ms) VALUES
    (now(), 101, 'us-east', 'dc1', 120),
    (now(), 101, 'us-east', 'dc1', 130),
    (now(), 102, 'us-east', 'dc1', 115),
    (now(), 201, 'us-west', 'dc1', 95),
    (now(), 202, 'us-west', 'dc1', 105),
    (now(), 301, 'eu-central', 'dc2', 145),
    (now(), 302, 'eu-central', 'dc2', 155);
```
We'll write one query for each of the three levels:

```sql
SELECT
    server_id,
    region,
    avgMerge(avg_response_time) AS avg_response_ms
FROM server_performance
GROUP BY server_id, region
ORDER BY region, server_id;
```

```response
┌─server_id─┬─region─────┬─avg_response_ms─┐
│       301 │ eu-central │             145 │
│       302 │ eu-central │             155 │
│       101 │ us-east    │             125 │
│       102 │ us-east    │             115 │
│       201 │ us-west    │              95 │
│       202 │ us-west    │             105 │
└───────────┴────────────┴─────────────────┘
```
```sql
SELECT
    region,
    datacenter,
    avgMerge(avg_response_time) AS avg_response_ms
FROM region_performance
GROUP BY region, datacenter
ORDER BY datacenter, region;
```

```response
┌─region─────┬─datacenter─┬────avg_response_ms─┐
│ us-east    │ dc1        │ 121.66666666666667 │
│ us-west    │ dc1        │                100 │
│ eu-central │ dc2        │                150 │
└────────────┴────────────┴────────────────────┘
```
```sql
SELECT
    datacenter,
    avgMerge(avg_response_time) AS avg_response_ms
FROM datacenter_performance
GROUP BY datacenter
ORDER BY datacenter;
```

```response
┌─datacenter─┬─avg_response_ms─┐
│ dc1        │             113 │
│ dc2        │             150 │
└────────────┴─────────────────┘
```
We can insert more data:

```sql
INSERT INTO raw_server_metrics (timestamp, server_id, region, datacenter, response_time_ms) VALUES
    (now(), 101, 'us-east', 'dc1', 140),
    (now(), 201, 'us-west', 'dc1', 85),
    (now(), 301, 'eu-central', 'dc2', 135);
```
Let's check the datacenter-level performance again. Notice how the entire aggregation chain updated automatically:

```sql
SELECT
    datacenter,
    avgMerge(avg_response_time) AS avg_response_ms
FROM datacenter_performance
GROUP BY datacenter
ORDER BY datacenter;
```

```response
┌─datacenter─┬────avg_response_ms─┐
│ dc1        │ 112.85714285714286 │
│ dc2        │                145 │
└────────────┴────────────────────┘
```
## See also {#see-also}

- `avg`
- `AggregateFunction`
- `Merge`
- `MergeState` | {"source_file": "avgMergeState.md"} | [
0.050523653626441956,
-0.022808140143752098,
-0.012817883864045143,
0.060879990458488464,
-0.06284798681735992,
-0.04930058866739273,
-0.01615188829600811,
-0.0022085628006607294,
0.007332769688218832,
0.05298518389463425,
0.003275175578892231,
-0.15796497464179993,
0.04420539364218712,
-0... |
4db51e18-8437-49d5-9e82-8765251b92da | slug: '/examples/aggregate-function-combinators/sumMap'
title: 'sumMap'
description: 'Example of using the sumMap combinator'
keywords: ['sum', 'map', 'combinator', 'examples', 'sumMap']
sidebar_label: 'sumMap'
doc_type: 'reference'
# sumMap {#summap}

## Description {#description}

The `Map` combinator can be applied to the `sum` function to calculate the sum of values in a Map according to each key, using the `sumMap` aggregate combinator function.

## Example usage {#example-usage}

In this example, we'll create a table that stores status codes and their counts for different timeslots, where each row contains a Map of status codes to their corresponding counts. We'll use `sumMap` to calculate the total count for each status code within each timeslot.
```sql title="Query"
CREATE TABLE metrics(
date Date,
timeslot DateTime,
status Map(String, UInt64)
) ENGINE = Log;
INSERT INTO metrics VALUES
('2000-01-01', '2000-01-01 00:00:00', (['a', 'b', 'c'], [15, 25, 35])),
('2000-01-01', '2000-01-01 00:00:00', (['c', 'd', 'e'], [45, 55, 65])),
('2000-01-01', '2000-01-01 00:01:00', (['d', 'e', 'f'], [75, 85, 95])),
('2000-01-01', '2000-01-01 00:01:00', (['f', 'g', 'g'], [105, 115, 125]));
SELECT
    timeslot,
    sumMap(status)
FROM metrics
GROUP BY timeslot;
```
The `sumMap` function will calculate the total count for each status code within each timeslot. For example:

- In timeslot '2000-01-01 00:00:00':
  - Status 'a': 15
  - Status 'b': 25
  - Status 'c': 35 + 45 = 80
  - Status 'd': 55
  - Status 'e': 65
- In timeslot '2000-01-01 00:01:00':
  - Status 'd': 75
  - Status 'e': 85
  - Status 'f': 95 + 105 = 200
  - Status 'g': 115 + 125 = 240
```response title="Response"
   ┌────────────timeslot─┬─sumMap(status)───────────────────────┐
1. │ 2000-01-01 00:01:00 │ {'d':75,'e':85,'f':200,'g':240}      │
2. │ 2000-01-01 00:00:00 │ {'a':15,'b':25,'c':80,'d':55,'e':65} │
   └─────────────────────┴──────────────────────────────────────┘
```
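The per-key summation over the '00:01:00' timeslot can be sketched in Python — an illustration of the semantics with invented names, not ClickHouse code. Pairs rather than dicts are used so a repeated key like 'g' in a single row is handled:

```python
# Sketch of sumMap within one group: merge maps, summing values per key.
def sum_map(rows):
    result = {}
    for pairs in rows:                 # each row contributes (key, value) pairs
        for k, v in pairs:
            result[k] = result.get(k, 0) + v
    return dict(sorted(result.items()))

slot_01 = [[('d', 75), ('e', 85), ('f', 95)],
           [('f', 105), ('g', 115), ('g', 125)]]
print(sum_map(slot_01))  # {'d': 75, 'e': 85, 'f': 200, 'g': 240}
```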
## See also {#see-also}

- `sum`
- `Map` combinator | {"source_file": "sumMap.md"} | [
-0.0009643809753470123,
0.018690356984734535,
0.04346086084842682,
0.03386828303337097,
-0.11768486350774765,
0.03259209170937538,
0.060680098831653595,
0.07354189455509186,
0.0016686089802533388,
0.050508998334407806,
0.01979415863752365,
-0.07716478407382965,
0.07778358459472656,
-0.0287... |
d17240fc-bf7c-4c1b-941f-efe01fbf3d71 | slug: '/examples/aggregate-function-combinators/avgState'
title: 'avgState'
description: 'Example of using the avgState combinator'
keywords: ['avg', 'state', 'combinator', 'examples', 'avgState']
sidebar_label: 'avgState'
doc_type: 'reference'
# avgState {#avgState}

## Description {#description}

The `State` combinator can be applied to the `avg` function to produce an intermediate state of `AggregateFunction(avg, T)` type, where `T` is the specified type for the average.

## Example usage {#example-usage}

In this example, we'll look at how we can use the `AggregateFunction` type, together with the `avgState` function, to aggregate website traffic data.

First create the source table for website traffic data:
```sql
CREATE TABLE raw_page_views
(
    page_id UInt32,
    page_name String,
    response_time_ms UInt32, -- Page response time in milliseconds
    viewed_at DateTime DEFAULT now()
)
ENGINE = MergeTree()
ORDER BY (page_id, viewed_at);
```
Create the aggregate table that will store average response times. Note that `avg` cannot use the `SimpleAggregateFunction` type, as it requires a complex state (a sum and a count). We therefore use the `AggregateFunction` type:

```sql
CREATE TABLE page_performance
(
    page_id UInt32,
    page_name String,
    avg_response_time AggregateFunction(avg, UInt32) -- Stores the state needed for avg calculation
)
ENGINE = AggregatingMergeTree()
ORDER BY page_id;
```
Create an Incremental materialized view that will act as an insert trigger for new data and store the intermediate state data in the target table defined above:

```sql
CREATE MATERIALIZED VIEW page_performance_mv
TO page_performance
AS SELECT
    page_id,
    page_name,
    avgState(response_time_ms) AS avg_response_time -- Using -State combinator
FROM raw_page_views
GROUP BY page_id, page_name;
```
Insert some initial data into the source table, creating a part on disk:

```sql
INSERT INTO raw_page_views (page_id, page_name, response_time_ms) VALUES
    (1, 'Homepage', 120),
    (1, 'Homepage', 135),
    (2, 'Products', 95),
    (2, 'Products', 105),
    (3, 'About', 80),
    (3, 'About', 90);
```
Insert some more data to create a second part on disk:

```sql
INSERT INTO raw_page_views (page_id, page_name, response_time_ms) VALUES
    (1, 'Homepage', 150),
    (2, 'Products', 110),
    (3, 'About', 70),
    (4, 'Contact', 60),
    (4, 'Contact', 65);
```
Examine the target table `page_performance`:

```sql
SELECT
    page_id,
    page_name,
    avg_response_time,
    toTypeName(avg_response_time)
FROM page_performance;
```
 | {"source_file": "avgState.md"} | [
-0.03098784200847149,
-0.0359821617603302,
0.003540599951520562,
0.08797239512205124,
-0.08327476680278778,
0.040349770337343216,
0.07313743233680725,
0.06764184683561325,
-0.01837029680609703,
0.025467433035373688,
-0.06274843215942383,
-0.07237912714481354,
0.03682409226894379,
-0.026715... |
315c2d54-aad4-4a1b-9904-10321c084245 | Examine the target table `page_performance`:

```sql
SELECT
    page_id,
    page_name,
    avg_response_time,
    toTypeName(avg_response_time)
FROM page_performance;
```
```response
┌─page_id─┬─page_name─┬─avg_response_time─┬─toTypeName(avg_response_time)──┐
│       1 │ Homepage  │ �                 │ AggregateFunction(avg, UInt32) │
│       2 │ Products  │ �                 │ AggregateFunction(avg, UInt32) │
│       3 │ About     │ �                 │ AggregateFunction(avg, UInt32) │
│       1 │ Homepage  │ �                 │ AggregateFunction(avg, UInt32) │
│       2 │ Products  │ n                 │ AggregateFunction(avg, UInt32) │
│       3 │ About     │ F                 │ AggregateFunction(avg, UInt32) │
│       4 │ Contact   │ }                 │ AggregateFunction(avg, UInt32) │
└─────────┴───────────┴───────────────────┴────────────────────────────────┘
```
Notice that the `avg_response_time` column is of type `AggregateFunction(avg, UInt32)` and stores intermediate state information. Also notice that the row data for `avg_response_time` is not useful to us, and we see strange text characters such as `�`, `n`, `F`, `}`. This is the terminal's attempt to display binary data as text. The reason for this is that `AggregateFunction` types store their state in a binary format that's optimized for efficient storage and computation, not for human readability. This binary state contains all the information needed to calculate the average.
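Conceptually, that binary state behaves like a running (sum, count) pair per group. A tiny Python sketch (an illustration of the idea, not the actual on-disk format) shows why two parts on disk can later be merged into one correct average:

```python
# Each part on disk holds a partial (sum, count) state for the 'Homepage' group.
part1 = (120 + 135, 2)   # Homepage rows from the first insert
part2 = (150, 1)         # Homepage row from the second insert

# Merging states adds sums and counts; only then do we divide (like avgMerge).
merged = (part1[0] + part2[0], part1[1] + part2[1])
print(merged[0] / merged[1])  # 135.0
```

Averaging the two parts' finished averages (127.5 and 150) would give the wrong answer; merging the states first preserves the correct per-row weighting.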
To make use of it, use the `Merge` combinator:

```sql
SELECT
    page_id,
    page_name,
    avgMerge(avg_response_time) AS average_response_time_ms
FROM page_performance
GROUP BY page_id, page_name
ORDER BY page_id;
```
Now we see the correct averages:

```response
┌─page_id─┬─page_name─┬─average_response_time_ms─┐
│       1 │ Homepage  │                      135 │
│       2 │ Products  │       103.33333333333333 │
│       3 │ About     │                       80 │
│       4 │ Contact   │                     62.5 │
└─────────┴───────────┴──────────────────────────┘
```
## See also {#see-also}

- `avg`
- `State` | {"source_file": "avgState.md"} | [
-0.055386997759342194,
-0.015095003880560398,
-0.052420724183321,
0.07842761278152466,
-0.014406242407858372,
-0.01811540499329567,
0.05954962596297264,
0.017409823834896088,
0.012679224833846092,
0.014271436259150505,
0.03723018243908882,
-0.041802357882261276,
0.012997196987271309,
-0.09... |
172ffe42-e3a1-4082-8924-0ee763ee874a | slug: /guides/sre/keeper/clickhouse-keeper
sidebar_label: 'Configuring ClickHouse Keeper'
sidebar_position: 10
keywords: ['Keeper', 'ZooKeeper', 'clickhouse-keeper']
description: 'ClickHouse Keeper, or clickhouse-keeper, replaces ZooKeeper and provides replication and coordination.'
title: 'ClickHouse Keeper'
doc_type: 'guide'
# ClickHouse Keeper (clickhouse-keeper)

import SelfManaged from '@site/docs/_snippets/_self_managed_only_automated.md';

ClickHouse Keeper provides the coordination system for data replication and distributed DDL query execution. ClickHouse Keeper is compatible with ZooKeeper.
## Implementation details {#implementation-details}

ZooKeeper is one of the first well-known open-source coordination systems. It's implemented in Java and has quite a simple and powerful data model. ZooKeeper's coordination algorithm, ZooKeeper Atomic Broadcast (ZAB), doesn't provide linearizability guarantees for reads, because each ZooKeeper node serves reads locally. Unlike ZooKeeper, ClickHouse Keeper is written in C++ and uses an implementation of the RAFT algorithm. This algorithm allows linearizability for reads and writes, and has several open-source implementations in different languages.

By default, ClickHouse Keeper provides the same guarantees as ZooKeeper: linearizable writes and non-linearizable reads. It has a compatible client-server protocol, so any standard ZooKeeper client can be used to interact with ClickHouse Keeper. Snapshots and logs have a format incompatible with ZooKeeper, but the `clickhouse-keeper-converter` tool enables the conversion of ZooKeeper data to ClickHouse Keeper snapshots. The interserver protocol in ClickHouse Keeper is also incompatible with ZooKeeper, so a mixed ZooKeeper / ClickHouse Keeper cluster is impossible.
ClickHouse Keeper supports Access Control Lists (ACLs) the same way as ZooKeeper does. ClickHouse Keeper supports the same set of permissions and has the identical built-in schemes: `world`, `auth` and `digest`. The digest authentication scheme uses the pair `username:password`; the password is encoded in Base64.

:::note
External integrations are not supported.
:::
## Configuration {#configuration}

ClickHouse Keeper can be used as a standalone replacement for ZooKeeper or as an internal part of the ClickHouse server. In both cases the configuration is almost the same `.xml` file.

### Keeper configuration settings {#keeper-configuration-settings}

The main ClickHouse Keeper configuration tag is `<keeper_server>` and has the following parameters: | {"source_file": "index.md"} | [
-0.0640333741903305,
-0.036463700234889984,
-0.05929073691368103,
-0.03903617337346077,
-0.04752664640545845,
-0.14853714406490326,
0.025409836322069168,
-0.03023294173181057,
-0.04630452021956444,
0.04275169596076012,
-0.014039228670299053,
0.02536352351307869,
0.13250450789928436,
-0.006... |
a473cc60-29c3-4737-83cb-d761f0997101 | | Parameter | Description | Default |
|-----------|-------------|---------|
| `tcp_port` | Port for a client to connect. | `2181` |
| `tcp_port_secure` | Secure port for an SSL connection between client and keeper-server. | - |
| `server_id` | Unique server id; each participant of the ClickHouse Keeper cluster must have a unique number (1, 2, 3, and so on). | - |
| `log_storage_path` | Path to coordination logs; just like ZooKeeper, it is best to store logs on non-busy nodes. | - |
| `snapshot_storage_path` | Path to coordination snapshots. | - |
| `enable_reconfiguration` | Enable dynamic cluster reconfiguration via `reconfig` | {"source_file": "index.md"} | [
0.027120506390929222,
0.0663352832198143,
-0.047471147030591965,
-0.0024363878183066845,
-0.09090075641870499,
0.03879176452755928,
0.019684690982103348,
0.05261595919728279,
-0.0025524666998535395,
-0.05526917055249214,
0.03768141195178032,
-0.05999469757080078,
-0.006582620553672314,
-0.... |
f1035bee-4589-425c-af31-190a3d919f78 | | `enable_reconfiguration` | Enable dynamic cluster reconfiguration via `reconfig`. | `False` |
| `max_memory_usage_soft_limit` | Soft limit in bytes of keeper max memory usage. | `max_memory_usage_soft_limit_ratio` * `physical_memory_amount` |
| `max_memory_usage_soft_limit_ratio` | If `max_memory_usage_soft_limit` is not set or set to zero, we use this value to define the default soft limit. | `0.9` |
| `cgroups_memory_observer_wait_time` | If `max_memory_usage_soft_limit` is not set or is set to `0`, we use this interval to observe the amount of physical memory. Once the memory amount changes, we recalculate Keeper's memory soft limit by `max_memory_usage_soft_limit_ratio`. | `15` |
| `http_control` | Configuration of HTTP control interface. | - |
| `digest_enabled` | Enable real-time data consistency check. | `True` |
| `create_snapshot_on_exit` | Create a snapshot during shutdown. | - |
| `hostname_checks_enabled` | {"source_file": "index.md"} | [
0.0023206134792417288,
-0.009074118919670582,
-0.13705375790596008,
0.09035608172416687,
-0.0368494912981987,
-0.041361596435308456,
-0.02088981121778488,
0.0513080433011055,
-0.008739849552512169,
0.012403967790305614,
0.04773642495274544,
-0.048515286296606064,
0.001728318864479661,
-0.0... |
0586f46d-56fd-4f67-96f4-15e34c66e925 | |
hostname_checks_enabled
| Enable sanity hostname checks for cluster configuration (e.g. if localhost is used with remote endpoints) |
True
|
|
four_letter_word_white_list
| White list of 4lw commands. |
conf, cons, crst, envi, ruok, srst, srvr, stat, wchs, dirs, mntr, isro, rcvr, apiv, csnp, lgif, rqld, ydld
|
|
enable_ipv6
| Enable IPv6 |
True
| | {"source_file": "index.md"} | [
0.053190041333436966,
-0.043071918189525604,
-0.096128448843956,
-0.020728228613734245,
0.0648389607667923,
-0.01564498245716095,
-0.0693085640668869,
-0.030750149860978127,
-0.07657217234373093,
0.02924102358520031,
0.04072543978691101,
-0.048537734895944595,
0.014949738048017025,
-0.0307... |
af521337-532c-43d2-85ca-65b77150d4d2 | Other common parameters are inherited from the ClickHouse server config (
listen_host
,
logger
, and so on).
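The default for `max_memory_usage_soft_limit` above is derived from the ratio multiplied by the amount of physical memory. A minimal sketch of that arithmetic (the helper name is ours, not part of Keeper):

```python
import os

def default_memory_soft_limit(ratio=0.9, physical_memory_bytes=None):
    """Mirror of the documented default: ratio * physical memory amount.

    `physical_memory_bytes` can be passed explicitly (e.g. in tests);
    otherwise it is read from sysconf on Unix-like systems.
    """
    if physical_memory_bytes is None:
        physical_memory_bytes = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    return int(ratio * physical_memory_bytes)

# For a host with 16 GiB of RAM and the default ratio of 0.9:
limit = default_memory_soft_limit(physical_memory_bytes=16 * 1024**3)
```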
## Internal coordination settings {#internal-coordination-settings}

Internal coordination settings are located in the `<keeper_server>.<coordination_settings>` section and have the following parameters:
| Parameter                          | Description                                                                                                                                                                                                              | Default          |
|------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------|
| `operation_timeout_ms`             | Timeout for a single client operation (ms)                                                                                                                                                                               | `10000`          |
| `min_session_timeout_ms`           | Min timeout for client session (ms)                                                                                                                                                                                      | `10000`          |
| `session_timeout_ms`               | Max timeout for client session (ms)                                                                                                                                                                                      | `100000`         |
| `dead_session_check_period_ms`     | How often ClickHouse Keeper checks for dead sessions and removes them (ms)                                                                                                                                               | `500`            |
| `heart_beat_interval_ms`           | How often a ClickHouse Keeper leader will send heartbeats to followers (ms)                                                                                                                                              | `500`            |
| `election_timeout_lower_bound_ms`  | If the follower does not receive a heartbeat from the leader in this interval, then it can initiate leader election. Must be less than or equal to `election_timeout_upper_bound_ms`. Ideally they shouldn't be equal.    | `1000`           |
| `election_timeout_upper_bound_ms`  | If the follower does not receive a heartbeat from the leader in this interval, then it must initiate leader election.                                                                                                    | `2000`           |
| `rotate_log_storage_interval`      | How many log records to store in a single file.                                                                                                                                                                          | `100000`         |
| `reserved_log_items`               | How many coordination log records to store before compaction.                                                                                                                                                            | `100000`         |
| `snapshot_distance`                | How often ClickHouse Keeper will create new snapshots (in the number of records in logs).                                                                                                                                | `100000`         |
| `snapshots_to_keep`                | How many snapshots to keep.                                                                                                                                                                                              | `3`              |
| `stale_log_gap`                    | Threshold at which the leader considers a follower stale and sends it a snapshot instead of logs.                                                                                                                        | `10000`          |
| `fresh_log_gap`                    | When a node is considered fresh.                                                                                                                                                                                         | `200`            |
| `max_requests_batch_size`          | Max size of a batch (in request count) before it is sent to RAFT.                                                                                                                                                        | `100`            |
| `force_sync`                       | Call `fsync` on each write to the coordination log.                                                                                                                                                                      | `true`           |
| `quorum_reads`                     | Execute read requests as writes through the whole RAFT consensus, with similar speed.                                                                                                                                    | `false`          |
| `raft_logs_level`                  | Text logging level about coordination (trace, debug, and so on).                                                                                                                                                         | `system default` |
| `auto_forwarding`                  | Allow forwarding write requests from followers to the leader.                                                                                                                                                            | `true`           |
| `shutdown_timeout`                 | Wait to finish internal connections and shutdown (ms).                                                                                                                                                                   | `5000`           |
| `startup_timeout`                  | If the server doesn't connect to other quorum participants in the specified timeout it will terminate (ms).                                                                                                              | `30000`          |
| `async_replication`                | Enable async replication. All write and read guarantees are preserved while better performance is achieved. Disabled by default to avoid breaking backwards compatibility.                                               | `false`          |
| `latest_logs_cache_size_threshold` | Maximum total size of the in-memory cache of latest log entries.                                                                                                                                                         | `1GiB`           |
| `commit_logs_cache_size_threshold` | Maximum total size of the in-memory cache of log entries needed next for commit.                                                                                                                                         | `500MiB`         |
| `disk_move_retries_wait_ms`        | How long to wait between retries after a failure while a file was being moved between disks.                                                                                                                             | `1000`           |
| `disk_move_retries_during_init`    | The number of retries after a failure while a file was being moved between disks during initialization.                                                                                                                  | `100`            |
| `experimental_use_rocksdb`         | Use RocksDB as backend storage.                                                                                                                                                                                          | `0`              |
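As the table notes, the election timeout bounds must satisfy lower ≤ upper (and ideally differ), and the session timeout range must be consistent. A small illustrative check of those constraints (the function name and defaults mirror the table, but the helper itself is ours):

```python
def check_coordination_settings(settings):
    """Return a list of warnings for inconsistent coordination settings.

    Encodes two constraints stated in the table above:
    - election_timeout_lower_bound_ms must be <= election_timeout_upper_bound_ms
      (and ideally strictly less);
    - min_session_timeout_ms must not exceed session_timeout_ms.
    """
    warnings = []
    lo = settings.get("election_timeout_lower_bound_ms", 1000)
    hi = settings.get("election_timeout_upper_bound_ms", 2000)
    if lo > hi:
        warnings.append("election_timeout_lower_bound_ms exceeds upper bound")
    elif lo == hi:
        warnings.append("election timeout bounds are equal; ideally they differ")
    if settings.get("min_session_timeout_ms", 10000) > settings.get("session_timeout_ms", 100000):
        warnings.append("min_session_timeout_ms exceeds session_timeout_ms")
    return warnings

# The documented defaults are consistent:
assert check_coordination_settings({}) == []
```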
Quorum configuration is located in the `<keeper_server>.<raft_configuration>` section and contains a description of the servers.

The only parameter for the whole quorum is `secure`, which enables an encrypted connection for communication between quorum participants. The parameter can be set to `true` if an SSL connection is required for internal communication between nodes, or left unspecified otherwise.

The main parameters for each `<server>` are:

- `id`: Server identifier in the quorum.
- `hostname`: Hostname where this server is placed.
- `port`: Port where this server listens for connections.
- `can_become_leader`: Set to `false` to set up the server as a learner. If omitted, the value is `true`.

:::note
In the case of a change in the topology of your ClickHouse Keeper cluster (e.g., replacing a server), please make sure to keep the mapping of `server_id` to `hostname` consistent, and avoid shuffling or reusing an existing `server_id` for different servers (this can happen, for example, if you rely on automation scripts to deploy ClickHouse Keeper).

If the host of a Keeper instance can change, we recommend defining and using a hostname instead of a raw IP address. Changing the hostname is equivalent to removing and re-adding the server, which in some cases can be impossible to do (e.g. not enough Keeper instances for a quorum).
:::

:::note
`async_replication` is disabled by default to avoid breaking backwards compatibility. If all the Keeper instances in your cluster run a version that supports `async_replication` (v23.9+), we recommend enabling it, because it can improve performance without any downsides.
:::

Examples of configuration for a quorum with three nodes can be found in integration tests with the `test_keeper_` prefix. Example configuration for server #1:
```xml
<keeper_server>
    <tcp_port>2181</tcp_port>
    <server_id>1</server_id>
    <log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
    <snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>

    <coordination_settings>
        <operation_timeout_ms>10000</operation_timeout_ms>
        <session_timeout_ms>30000</session_timeout_ms>
        <raft_logs_level>trace</raft_logs_level>
    </coordination_settings>

    <raft_configuration>
        <server>
            <id>1</id>
            <hostname>zoo1</hostname>
            <port>9234</port>
        </server>
        <server>
            <id>2</id>
            <hostname>zoo2</hostname>
            <port>9234</port>
        </server>
        <server>
            <id>3</id>
            <hostname>zoo3</hostname>
            <port>9234</port>
        </server>
    </raft_configuration>
</keeper_server>
```
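For larger quorums, the `<raft_configuration>` block can be generated rather than written by hand. A sketch using only the standard library (the hostnames are placeholders; remember that ids must stay stable across topology changes):

```python
import xml.etree.ElementTree as ET

def raft_configuration(hosts, port=9234):
    """Build a <raft_configuration> element with one <server> per host,
    assigning ids 1..N in order."""
    raft = ET.Element("raft_configuration")
    for server_id, hostname in enumerate(hosts, start=1):
        server = ET.SubElement(raft, "server")
        ET.SubElement(server, "id").text = str(server_id)
        ET.SubElement(server, "hostname").text = hostname
        ET.SubElement(server, "port").text = str(port)
    return raft

xml_text = ET.tostring(raft_configuration(["zoo1", "zoo2", "zoo3"]), encoding="unicode")
```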
## How to run {#how-to-run}

ClickHouse Keeper is bundled into the ClickHouse server package. Just add the configuration of `<keeper_server>` to your `/etc/your_path_to_config/clickhouse-server/config.xml` and start the ClickHouse server as usual. If you want to run standalone ClickHouse Keeper, you can start it in a similar way with:

```bash
clickhouse-keeper --config /etc/your_path_to_config/config.xml
```

If you don't have the symlink (`clickhouse-keeper`) you can create it or specify `keeper` as an argument to `clickhouse`:
```bash
clickhouse keeper --config /etc/your_path_to_config/config.xml
```
## Four letter word commands {#four-letter-word-commands}

ClickHouse Keeper also provides 4lw commands which are almost the same as in ZooKeeper. Each command is composed of four letters, such as `mntr`, `stat`, etc. There are some more interesting commands: `stat` gives some general information about the server and connected clients, while `srvr` and `cons` give extended details on the server and connections, respectively.

The 4lw commands have a white list configuration, `four_letter_word_white_list`, whose default value is `conf,cons,crst,envi,ruok,srst,srvr,stat,wchs,dirs,mntr,isro,rcvr,apiv,csnp,lgif,rqld,ydld`.

You can issue the commands to ClickHouse Keeper via telnet or nc, at the client port.

```bash
echo mntr | nc localhost 9181
```

Below are the detailed 4lw commands:
`ruok`: Tests if the server is running in a non-error state. The server will respond with `imok` if it is running. Otherwise, it will not respond at all. A response of `imok` does not necessarily indicate that the server has joined the quorum, just that the server process is active and bound to the specified client port. Use `stat` for details on state with respect to quorum and client connection information.

```response
imok
```

`mntr`: Outputs a list of variables that can be used for monitoring the health of the cluster.

```response
zk_version v21.11.1.1-prestable-7a4a0b0edef0ad6e0aa662cd3b90c3f4acf796e7
zk_avg_latency 0
zk_max_latency 0
zk_min_latency 0
zk_packets_received 68
zk_packets_sent 68
zk_num_alive_connections 1
zk_outstanding_requests 0
zk_server_state leader
zk_znode_count 4
zk_watch_count 1
zk_ephemerals_count 0
zk_approximate_data_size 723
zk_open_file_descriptor_count 310
zk_max_file_descriptor_count 10240
zk_followers 0
zk_synced_followers 0
```
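The `mntr` output is one key/value pair per line, which makes it easy to consume from monitoring scripts. A minimal parser sketch (ours, not part of Keeper):

```python
def parse_mntr(text):
    """Parse `mntr` output: one `key value` pair per line (tab- or
    space-separated). Values are kept as ints where possible."""
    result = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition("\t")
        if not value:  # fall back to space-separated output
            key, _, value = line.partition(" ")
        value = value.strip()
        result[key.strip()] = int(value) if value.lstrip("-").isdigit() else value
    return result

stats = parse_mntr("zk_server_state leader\nzk_znode_count 4\nzk_avg_latency 0")
# e.g. alert when this node is not the leader, or when latency grows
is_leader = stats["zk_server_state"] == "leader"
```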
`srvr`: Lists full details for the server.

```response
ClickHouse Keeper version: v21.11.1.1-prestable-7a4a0b0edef0ad6e0aa662cd3b90c3f4acf796e7
Latency min/avg/max: 0/0/0
Received: 2
Sent: 2
Connections: 1
Outstanding: 0
Zxid: 34
Mode: leader
Node count: 4
```

`stat`: Lists brief details for the server and connected clients.

```response
ClickHouse Keeper version: v21.11.1.1-prestable-7a4a0b0edef0ad6e0aa662cd3b90c3f4acf796e7
Clients:
 192.168.1.1:52852(recved=0,sent=0)
 192.168.1.1:52042(recved=24,sent=48)
Latency min/avg/max: 0/0/0
Received: 4
Sent: 4
Connections: 1
Outstanding: 0
Zxid: 36
Mode: leader
Node count: 4
```
`srst`: Reset server statistics. The command will affect the results of `srvr`, `mntr` and `stat`.

```response
Server stats reset.
```

`conf`: Print details about the serving configuration.
```response
server_id=1
tcp_port=2181
four_letter_word_white_list=*
log_storage_path=./coordination/logs
snapshot_storage_path=./coordination/snapshots
max_requests_batch_size=100
session_timeout_ms=30000
operation_timeout_ms=10000
dead_session_check_period_ms=500
heart_beat_interval_ms=500
election_timeout_lower_bound_ms=1000
election_timeout_upper_bound_ms=2000
reserved_log_items=1000000000000000
snapshot_distance=10000
auto_forwarding=true
shutdown_timeout=5000
startup_timeout=240000
raft_logs_level=information
snapshots_to_keep=3
rotate_log_storage_interval=100000
stale_log_gap=10000
fresh_log_gap=200
max_requests_batch_size=100
quorum_reads=false
force_sync=false
compress_logs=true
compress_snapshots_with_zstd_format=true
configuration_change_tries_count=20
```
`cons`: List full connection/session details for all clients connected to this server. Includes information on numbers of packets received/sent, session id, operation latencies, last operation performed, etc.

```response
192.168.1.1:52163(recved=0,sent=0,sid=0xffffffffffffffff,lop=NA,est=1636454787393,to=30000,lzxid=0xffffffffffffffff,lresp=0,llat=0,minlat=0,avglat=0,maxlat=0)
192.168.1.1:52042(recved=9,sent=18,sid=0x0000000000000001,lop=List,est=1636454739887,to=30000,lcxid=0x0000000000000005,lzxid=0x0000000000000005,lresp=1636454739892,llat=0,minlat=0,avglat=0,maxlat=0)
```
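Each `cons` line packs the client endpoint plus a parenthesised list of `key=value` stats. A small regex-based parser sketch (illustrative only, not a Keeper API):

```python
import re

CONS_LINE = re.compile(r"^\s*(?P<addr>[^(]+)\((?P<stats>[^)]*)\)\s*$")

def parse_cons_line(line):
    """Split '192.168.1.1:52042(recved=9,sent=18,...)' into
    (address, stats dict); decimal and 0x... hex values become ints."""
    m = CONS_LINE.match(line)
    if m is None:
        raise ValueError("unrecognized cons line: %r" % line)
    stats = {}
    for pair in m.group("stats").split(","):
        key, _, value = pair.partition("=")
        try:
            stats[key] = int(value, 0)  # base 0 handles both 9 and 0x05
        except ValueError:
            stats[key] = value          # non-numeric fields like lop=List
    return m.group("addr"), stats

addr, stats = parse_cons_line("192.168.1.1:52042(recved=9,sent=18,lop=List)")
```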
`crst`: Reset connection/session statistics for all connections.

```response
Connection stats reset.
```

`envi`: Print details about the serving environment.

```response
Environment:
clickhouse.keeper.version=v21.11.1.1-prestable-7a4a0b0edef0ad6e0aa662cd3b90c3f4acf796e7
host.name=ZBMAC-C02D4054M.local
os.name=Darwin
os.arch=x86_64
os.version=19.6.0
cpu.count=12
user.name=root
user.home=/Users/JackyWoo/
user.dir=/Users/JackyWoo/project/jd/clickhouse/cmake-build-debug/programs/
user.tmp=/var/folders/b4/smbq5mfj7578f2jzwn602tt40000gn/T/
```

`dirs`: Shows the total size of snapshot and log files in bytes.

```response
snapshot_dir_size: 0
log_dir_size: 3875
```

`isro`: Tests if the server is running in read-only mode. The server will respond with `ro` if in read-only mode or `rw` if not in read-only mode.

```response
rw
```

`wchs`: Lists brief information on watches for the server.

```response
1 connections watching 1 paths
Total watches:1
```

`wchc`: Lists detailed information on watches for the server, by session. This outputs a list of sessions (connections) with associated watches (paths). Note: depending on the number of watches this operation may be expensive (i.e., impact server performance); use it carefully.

```response
0x0000000000000001
    /clickhouse/task_queue/ddl
```
`wchp`: Lists detailed information on watches for the server, by path. This outputs a list of paths (znodes) with associated sessions. Note: depending on the number of watches this operation may be expensive (i.e., impact server performance); use it carefully.

```response
/clickhouse/task_queue/ddl
    0x0000000000000001
```

`dump`: Lists the outstanding sessions and ephemeral nodes. This only works on the leader.

```response
Sessions dump (2):
0x0000000000000001
0x0000000000000002
Sessions with Ephemerals (1):
0x0000000000000001
 /clickhouse/task_queue/ddl
```

`csnp`: Schedule a snapshot creation task. Returns the last committed log index of the scheduled snapshot if successful, or `Failed to schedule snapshot creation task.` if failed. Note that the `lgif` command can help you determine whether the snapshot is done.

```response
100
```
`lgif`: Keeper log information. `first_log_idx`: my first log index in the log store; `first_log_term`: my first log term; `last_log_idx`: my last log index in the log store; `last_log_term`: my last log term; `last_committed_log_idx`: my last committed log index in the state machine; `leader_committed_log_idx`: the leader's committed log index from my perspective; `target_committed_log_idx`: the target log index that should be committed to; `last_snapshot_idx`: the largest committed log index in the last snapshot.

```response
first_log_idx 1
first_log_term 1
last_log_idx 101
last_log_term 1
last_committed_log_idx 100
leader_committed_log_idx 101
target_committed_log_idx 101
last_snapshot_idx 50
```
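As noted under `csnp`, the `lgif` fields can tell you when a scheduled snapshot has completed: compare the index returned by `csnp` with `last_snapshot_idx`. A sketch of that check (the helper name is ours):

```python
def snapshot_completed(lgif, scheduled_idx):
    """A snapshot scheduled via `csnp` is done once the last snapshot
    covers at least the log index that `csnp` returned."""
    return lgif.get("last_snapshot_idx", 0) >= scheduled_idx

# With the sample `lgif` output above, a snapshot scheduled at index 100
# is still in progress, since last_snapshot_idx is only 50:
in_progress = not snapshot_completed({"last_snapshot_idx": 50}, 100)
```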
`rqld`: Request to become the new leader. Returns `Sent leadership request to leader.` if the request was sent, or `Failed to send leadership request to leader.` if it was not. Note that if the node is already the leader, the outcome is the same as if the request were sent.

```response
Sent leadership request to leader.
```

`ftfl`: Lists all feature flags and whether they are enabled for the Keeper instance.

```response
filtered_list 1
multi_read 1
check_not_exists 0
```

`ydld`: Request to yield leadership and become a follower. If the server receiving the request is the leader, it will pause write operations first, wait until the successor (the current leader can never be the successor) finishes catching up with the latest log, and then resign. The successor will be chosen automatically. Returns `Sent yield leadership request to leader.` if the request was sent, or `Failed to send yield leadership request to leader.` if it was not. Note that if the node is already a follower, the outcome is the same as if the request were sent.

```response
Sent yield leadership request to leader.
```

`pfev`: Returns the values for all collected events. For each event it returns the event name, event value, and the event's description.
```response
FileOpen 62 Number of files opened.
Seek 4 Number of times the 'lseek' function was called.
ReadBufferFromFileDescriptorRead 126 Number of reads (read/pread) from a file descriptor. Does not include sockets.
ReadBufferFromFileDescriptorReadFailed 0 Number of times the read (read/pread) from a file descriptor have failed.
ReadBufferFromFileDescriptorReadBytes 178846 Number of bytes read from file descriptors. If the file is compressed, this will show the compressed data size.
WriteBufferFromFileDescriptorWrite 7 Number of writes (write/pwrite) to a file descriptor. Does not include sockets.
WriteBufferFromFileDescriptorWriteFailed 0 Number of times the write (write/pwrite) to a file descriptor have failed.
WriteBufferFromFileDescriptorWriteBytes 153 Number of bytes written to file descriptors. If the file is compressed, this will show compressed data size.
FileSync 2 Number of times the F_FULLFSYNC/fsync/fdatasync function was called for files.
DirectorySync 0 Number of times the F_FULLFSYNC/fsync/fdatasync function was called for directories.
FileSyncElapsedMicroseconds 12756 Total time spent waiting for F_FULLFSYNC/fsync/fdatasync syscall for files.
DirectorySyncElapsedMicroseconds 0 Total time spent waiting for F_FULLFSYNC/fsync/fdatasync syscall for directories.
ReadCompressedBytes 0 Number of bytes (the number of bytes before decompression) read from compressed sources (files, network).
CompressedReadBufferBlocks 0 Number of compressed blocks (the blocks of data that are compressed independent of each other) read from compressed sources (files, network).
CompressedReadBufferBytes 0 Number of uncompressed bytes (the number of bytes after decompression) read from compressed sources (files, network).
AIOWrite 0 Number of writes with Linux or FreeBSD AIO interface
AIOWriteBytes 0 Number of bytes written with Linux or FreeBSD AIO interface
...
```
## HTTP control {#http-control}

ClickHouse Keeper provides an HTTP interface to check if a replica is ready to receive traffic. It may be used in cloud environments, such as Kubernetes.

Example of a configuration that enables the `/ready` endpoint:

```xml
<clickhouse>
    <keeper_server>
        <http_control>
            <port>9182</port>
            <readiness>
                <endpoint>/ready</endpoint>
            </readiness>
        </http_control>
    </keeper_server>
</clickhouse>
```
## Feature flags {#feature-flags}

Keeper is fully compatible with ZooKeeper and its clients, but it also introduces some unique features and request types that can be used by the ClickHouse client. Because those features can introduce backward-incompatible changes, most of them are disabled by default and can be enabled using the `keeper_server.feature_flags` config. All features can be disabled explicitly. If you want to enable a new feature for your Keeper cluster, we recommend first updating all the Keeper instances in the cluster to a version that supports the feature, and then enabling the feature itself.

Example of a feature flag config that disables `multi_read` and enables `check_not_exists`:

```xml
<clickhouse>
    <keeper_server>
        <feature_flags>
            <multi_read>0</multi_read>
            <check_not_exists>1</check_not_exists>
        </feature_flags>
    </keeper_server>
</clickhouse>
```

The following features are available:

| Feature               | Description                                                                                                                                  | Default |
|-----------------------|----------------------------------------------------------------------------------------------------------------------------------------------|---------|
| `multi_read`          | Support for read multi request                                                                                                               | `1`     |
| `filtered_list`       | Support for list request which filters results by the type of node (ephemeral or persistent)                                                 | `1`     |
| `check_not_exists`    | Support for `CheckNotExists` request, which asserts that a node doesn't exist                                                                | `1`     |
| `create_if_not_exists`| Support for `CreateIfNotExists` request, which will try to create a node if it doesn't exist. If it exists, no changes are applied and `ZOK` is returned | `1`     |
| `remove_recursive`    | Support for `RemoveRecursive` request, which removes the node along with its subtree                                                         | `1`     |
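A `<feature_flags>` fragment like the one above can be generated from a plain mapping when managing many clusters. A sketch using only the standard library (the flag names must match Keeper's; the helper is ours):

```python
import xml.etree.ElementTree as ET

def feature_flags_xml(flags):
    """Render {'multi_read': False, 'check_not_exists': True} into the
    <clickhouse><keeper_server><feature_flags>...</feature_flags> layout,
    using 0/1 values as in the example config above."""
    root = ET.Element("clickhouse")
    ff = ET.SubElement(ET.SubElement(root, "keeper_server"), "feature_flags")
    for name, enabled in flags.items():
        ET.SubElement(ff, name).text = "1" if enabled else "0"
    return ET.tostring(root, encoding="unicode")

xml_text = feature_flags_xml({"multi_read": False, "check_not_exists": True})
```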
:::note
Some of the feature flags are enabled by default from version 25.7.
The recommended way of upgrading Keeper to 25.7+ is to first upgrade to version 24.9+.
:::
## Migration from ZooKeeper {#migration-from-zookeeper}

Seamless migration from ZooKeeper to ClickHouse Keeper is not possible. You have to stop your ZooKeeper cluster, convert the data, and start ClickHouse Keeper. The `clickhouse-keeper-converter` tool converts ZooKeeper logs and snapshots into a ClickHouse Keeper snapshot. It works only with ZooKeeper > 3.4. Steps for migration:
1. Stop all ZooKeeper nodes.
2. Optional, but recommended: find the ZooKeeper leader node, then start and stop it again. This forces ZooKeeper to create a consistent snapshot.
3. Run `clickhouse-keeper-converter` on the leader, for example:

   ```bash
   clickhouse-keeper-converter --zookeeper-logs-dir /var/lib/zookeeper/version-2 --zookeeper-snapshots-dir /var/lib/zookeeper/version-2 --output-dir /path/to/clickhouse/keeper/snapshots
   ```

4. Copy the snapshot to ClickHouse server nodes with a configured `keeper`, or start ClickHouse Keeper instead of ZooKeeper. The snapshot must persist on all nodes; otherwise, empty nodes can be faster and one of them can become the leader.

:::note
The `keeper-converter` tool is not available from the Keeper standalone binary.
If you have ClickHouse installed, you can use the binary directly:

```bash
clickhouse keeper-converter ...
```

Otherwise, you can download the binary and run the tool as described above without installing ClickHouse.
:::
## Recovering after losing quorum {#recovering-after-losing-quorum}

Because ClickHouse Keeper uses Raft, it can tolerate a certain number of node crashes depending on the cluster size. E.g. for a 3-node cluster, it will continue working correctly if only 1 node crashes.

Cluster configuration can be changed dynamically, but there are some limitations. Reconfiguration also relies on Raft, so to add or remove a node from the cluster you need to have a quorum. If you lose too many nodes in your cluster at the same time, with no chance of starting them again, Raft will stop working and won't allow you to reconfigure your cluster in the conventional way.

Nevertheless, ClickHouse Keeper has a recovery mode which allows you to forcefully reconfigure your cluster with only 1 node. This should be done only as a last resort, if you cannot start your nodes again or start a new instance on the same endpoint.

Important things to note before continuing:
- Make sure that the failed nodes cannot connect to the cluster again.
- Do not start any of the new nodes until it's specified in the steps.
After making sure that the above things are true, you need to do the following:

1. Pick a single Keeper node to be your new leader. Be aware that the data of that node will be used for the entire cluster, so we recommend using the node with the most up-to-date state.
2. Before doing anything else, make a backup of the `log_storage_path` and `snapshot_storage_path` folders of the picked node.
3. Reconfigure the cluster on all of the nodes you want to use.
4. Send the four letter command `rcvr` to the node you picked, which will move the node to the recovery mode, OR stop the Keeper instance on the picked node and start it again with the `--force-recovery` argument.
5. One by one, start Keeper instances on the new nodes, making sure that `mntr` returns `follower` for the `zk_server_state` before starting the next one.
6. While in the recovery mode, the leader node will return an error message for the `mntr` command until it achieves quorum with the new nodes, and it will refuse any requests from the clients and the followers.
7. After quorum is achieved, the leader node will return to the normal mode of operation, accepting all the requests using Raft. Verify with `mntr`, which should return `leader` for the `zk_server_state`.
## Using disks with Keeper {#using-disks-with-keeper}

Keeper supports a subset of external disks for storing snapshots, log files, and the state file.

Supported types of disks are:
- s3_plain
- s3
- local

The following is an example of disk definitions contained inside a config.
```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <log_local>
                <type>local</type>
                <path>/var/lib/clickhouse/coordination/logs/</path>
            </log_local>
            <log_s3_plain>
                <type>s3_plain</type>
                <endpoint>https://some_s3_endpoint/logs/</endpoint>
                <access_key_id>ACCESS_KEY</access_key_id>
                <secret_access_key>SECRET_KEY</secret_access_key>
            </log_s3_plain>
            <snapshot_local>
                <type>local</type>
                <path>/var/lib/clickhouse/coordination/snapshots/</path>
            </snapshot_local>
            <snapshot_s3_plain>
                <type>s3_plain</type>
                <endpoint>https://some_s3_endpoint/snapshots/</endpoint>
                <access_key_id>ACCESS_KEY</access_key_id>
                <secret_access_key>SECRET_KEY</secret_access_key>
            </snapshot_s3_plain>
            <state_s3_plain>
                <type>s3_plain</type>
                <endpoint>https://some_s3_endpoint/state/</endpoint>
                <access_key_id>ACCESS_KEY</access_key_id>
                <secret_access_key>SECRET_KEY</secret_access_key>
            </state_s3_plain>
        </disks>
    </storage_configuration>
</clickhouse>
```
To use a disk for logs, set the `keeper_server.log_storage_disk` config to the name of a disk.
To use a disk for snapshots, set the `keeper_server.snapshot_storage_disk` config to the name of a disk.
Additionally, different disks can be used for the latest logs or snapshots by using `keeper_server.latest_log_storage_disk` and `keeper_server.latest_snapshot_storage_disk`, respectively.
In that case, Keeper will automatically move files to the correct disks when new logs or snapshots are created.
To use a disk for the state file, set the `keeper_server.state_storage_disk` config to the name of a disk.

Moving files between disks is safe, and there is no risk of losing data if Keeper stops in the middle of a transfer. Until a file is completely moved to the new disk, it's not deleted from the old one.

Keeper with `keeper_server.coordination_settings.force_sync` set to `true` (`true` by default) cannot satisfy some guarantees for all types of disks. Right now, only disks of type `local` support persistent sync. If `force_sync` is used, `log_storage_disk` should be a `local` disk if `latest_log_storage_disk` is not used. If `latest_log_storage_disk` is used, it should always be a `local` disk. If `force_sync` is disabled, disks of all types can be used in any setup.

A possible storage setup for a Keeper instance could look like the following:
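The `force_sync` constraints above reduce to a small decision rule: with `force_sync` enabled, the disk receiving the latest log writes must support persistent sync, which currently only `local` disks do. An illustrative check (the function name is ours):

```python
def validate_log_disks(force_sync, log_disk_type, latest_log_disk_type=None):
    """Encode the force_sync constraints stated above; returns a list
    of configuration errors (empty list means the setup is allowed)."""
    errors = []
    if not force_sync:
        return errors  # any disk types are fine without force_sync
    if latest_log_disk_type is None:
        if log_disk_type != "local":
            errors.append("log_storage_disk must be local when "
                          "latest_log_storage_disk is not used")
    elif latest_log_disk_type != "local":
        errors.append("latest_log_storage_disk must always be a local disk")
    return errors

# A tiered setup with an s3_plain log disk and a local latest-log disk is valid:
assert validate_log_disks(True, "s3_plain", "local") == []
```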
```xml
<keeper_server>
    <log_storage_disk>log_s3_plain</log_storage_disk>
    <latest_log_storage_disk>log_local</latest_log_storage_disk>

    <snapshot_storage_disk>snapshot_s3_plain</snapshot_storage_disk>
    <latest_snapshot_storage_disk>snapshot_local</latest_snapshot_storage_disk>
</keeper_server>
```
This instance will store all but the latest logs on the `log_s3_plain` disk, while the latest log will be on the `log_local` disk. The same logic applies for snapshots: all but the latest snapshots will be stored on `snapshot_s3_plain`, while the latest snapshot will be on the `snapshot_local` disk.
Changing disk setup {#changing-disk-setup}
:::important
Before applying a new disk setup, manually back up all Keeper logs and snapshots.
:::
If a tiered disk setup is defined (using separate disks for the latest files), Keeper will try to automatically move files to the correct disks on startup.
The same guarantee is applied as before; until the file is completely moved to the new disk, it's not deleted from the old one, so multiple restarts
can be safely done.
If it's necessary to move files to a completely new disk (or move from a 2-disk setup to a single-disk setup), it's possible to use multiple definitions of `keeper_server.old_snapshot_storage_disk` and `keeper_server.old_log_storage_disk`.
The following config shows how we can move from the previous 2-disk setup to a completely new single-disk setup:
```xml
<keeper_server>
    <old_log_storage_disk>log_local</old_log_storage_disk>
    <old_log_storage_disk>log_s3_plain</old_log_storage_disk>
    <log_storage_disk>log_local2</log_storage_disk>
<old_snapshot_storage_disk>snapshot_s3_plain</old_snapshot_storage_disk>
<old_snapshot_storage_disk>snapshot_local</old_snapshot_storage_disk>
<snapshot_storage_disk>snapshot_local2</snapshot_storage_disk>
</keeper_server>
```
On startup, all the log files will be moved from `log_local` and `log_s3_plain` to the `log_local2` disk.
Also, all the snapshot files will be moved from `snapshot_local` and `snapshot_s3_plain` to the `snapshot_local2` disk.
Configuring logs cache {#configuring-logs-cache}
To minimize the amount of data read from disk, Keeper caches log entries in memory.
If requests are large, log entries can take up too much memory, so the amount of cached logs is capped.
The limit is controlled with these two configs:
- `latest_logs_cache_size_threshold` - total size of latest logs stored in cache
- `commit_logs_cache_size_threshold` - total size of subsequent logs that need to be committed next
If the default values are too big, you can reduce the memory usage by reducing these two configs.
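As an illustration, lowering both caps might look like the following config fragment. The element names match the two settings described above; the 100 MiB values here are arbitrary examples chosen for this sketch, not defaults:

```xml
<keeper_server>
    <coordination_settings>
        <!-- cap the in-memory cache of the latest log entries (bytes; illustrative value) -->
        <latest_logs_cache_size_threshold>104857600</latest_logs_cache_size_threshold>
        <!-- cap the cache of log entries waiting to be committed (bytes; illustrative value) -->
        <commit_logs_cache_size_threshold>104857600</commit_logs_cache_size_threshold>
    </coordination_settings>
</keeper_server>
```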
:::note
You can use the `pfev` command to check the amount of logs read from each cache and from a file.
You can also use metrics from the Prometheus endpoint to track the current size of both caches.
:::
Prometheus {#prometheus}
Keeper can expose metrics data for scraping from Prometheus.
Settings:

- `endpoint` – HTTP endpoint for scraping metrics by the Prometheus server. Starts with '/'.
- `port` – Port for `endpoint`.
- `metrics` – Flag to expose metrics from the `system.metrics` table.
- `events` – Flag to expose metrics from the `system.events` table.
- `asynchronous_metrics` – Flag to expose current metric values from the `system.asynchronous_metrics` table.
Example:

```xml
<clickhouse>
    <listen_host>0.0.0.0</listen_host>
    <http_port>8123</http_port>
    <tcp_port>9000</tcp_port>
    <!-- highlight-start -->
    <prometheus>
        <endpoint>/metrics</endpoint>
        <port>9363</port>
        <metrics>true</metrics>
        <events>true</events>
        <asynchronous_metrics>true</asynchronous_metrics>
    </prometheus>
    <!-- highlight-end -->
</clickhouse>
```
Check (replace `127.0.0.1` with the IP address or hostname of your ClickHouse server):

```bash
curl 127.0.0.1:9363/metrics
```
Please also see the ClickHouse Cloud Prometheus integration.
ClickHouse Keeper user guide {#clickhouse-keeper-user-guide}
This guide provides simple and minimal settings to configure ClickHouse Keeper with an example on how to test distributed operations. This example is performed using 3 nodes on Linux.
1. Configure nodes with Keeper settings {#1-configure-nodes-with-keeper-settings}
Install 3 ClickHouse instances on 3 hosts (`chnode1`, `chnode2`, `chnode3`). (View the Quick Start for details on installing ClickHouse.)
On each node, add the following entry to allow external communication through the network interface:

```xml
<listen_host>0.0.0.0</listen_host>
```
Add the following ClickHouse Keeper configuration to all three servers, updating the `<server_id>` setting for each server; for `chnode1` it would be `1`, for `chnode2` it would be `2`, etc.
```xml
<keeper_server>
    <tcp_port>9181</tcp_port>
    <server_id>1</server_id>
    <log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
    <snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>
    <coordination_settings>
        <operation_timeout_ms>10000</operation_timeout_ms>
        <session_timeout_ms>30000</session_timeout_ms>
        <raft_logs_level>warning</raft_logs_level>
    </coordination_settings>
    <raft_configuration>
        <server>
            <id>1</id>
            <hostname>chnode1.domain.com</hostname>
            <port>9234</port>
        </server>
        <server>
            <id>2</id>
            <hostname>chnode2.domain.com</hostname>
            <port>9234</port>
        </server>
        <server>
            <id>3</id>
            <hostname>chnode3.domain.com</hostname>
            <port>9234</port>
        </server>
    </raft_configuration>
</keeper_server>
```
These are the basic settings used above:
|Parameter |Description |Example |
|----------|------------------------------|---------------------|
|tcp_port |port to be used by clients of Keeper|9181 (default equivalent of 2181 as in ZooKeeper)|
|server_id|identifier for each ClickHouse Keeper server used in the raft configuration|1|
|coordination_settings|section for parameters such as timeouts|timeouts: 10000, log level: trace|
|server |definition of a server participating|list of each server definition|
|raft_configuration|settings for each server in the Keeper cluster|server and settings for each|
|id |numeric id of the server for Keeper services|1|
|hostname |hostname, IP or FQDN of each server in the Keeper cluster|`chnode1.domain.com`|
|port|port to listen on for interserver Keeper connections|9234|
Enable the ZooKeeper component. It will use the ClickHouse Keeper engine:

```xml
<zookeeper>
    <node>
        <host>chnode1.domain.com</host>
        <port>9181</port>
    </node>
    <node>
        <host>chnode2.domain.com</host>
        <port>9181</port>
    </node>
    <node>
        <host>chnode3.domain.com</host>
        <port>9181</port>
    </node>
</zookeeper>
```
These are the basic settings used above:

|Parameter |Description |Example |
|----------|------------------------------|---------------------|
|node |list of nodes for ClickHouse Keeper connections|settings entry for each server|
|host|hostname, IP or FQDN of each ClickHouse Keeper node|`chnode1.domain.com`|
|port|ClickHouse Keeper client port|9181|
Restart ClickHouse and verify that each Keeper instance is running. Execute the following command on each server. The `ruok` command returns `imok` if Keeper is running and healthy:

```bash
# echo ruok | nc localhost 9181; echo
imok
```
The `system` database has a table named `zookeeper` that contains the details of your ClickHouse Keeper instances. Let's view the table:

```sql
SELECT *
FROM system.zookeeper
WHERE path IN ('/', '/clickhouse')
```
The table looks like:
```response
┌─name───────┬─value─┬─czxid─┬─mzxid─┬───────────────ctime─┬───────────────mtime─┬─version─┬─cversion─┬─aversion─┬─ephemeralOwner─┬─dataLength─┬─numChildren─┬─pzxid─┬─path────────┐
│ clickhouse │       │   124 │   124 │ 2022-03-07 00:49:34 │ 2022-03-07 00:49:34 │       0 │        2 │        0 │              0 │          0 │           2 │  5693 │ /           │
│ task_queue │       │   125 │   125 │ 2022-03-07 00:49:34 │ 2022-03-07 00:49:34 │       0 │        1 │        0 │              0 │          0 │           1 │   126 │ /clickhouse │
│ tables     │       │  5693 │  5693 │ 2022-03-07 00:49:34 │ 2022-03-07 00:49:34 │       0 │        3 │        0 │              0 │          0 │           3 │  6461 │ /clickhouse │
└────────────┴───────┴───────┴───────┴─────────────────────┴─────────────────────┴─────────┴──────────┴──────────┴────────────────┴────────────┴─────────────┴───────┴─────────────┘
```
2. Configure a cluster in ClickHouse {#2--configure-a-cluster-in-clickhouse}
Let's configure a simple cluster with 2 shards and only one replica on 2 of the nodes. The third node will be used to achieve a quorum for the requirement in ClickHouse Keeper. Update the configuration on `chnode1` and `chnode2`. The following cluster defines 1 shard on each node for a total of 2 shards with no replication. In this example, some of the data will be on one node and some will be on the other node:
```xml
<remote_servers>
    <cluster_2S_1R>
        <shard>
            <replica>
                <host>chnode1.domain.com</host>
                <port>9000</port>
                <user>default</user>
                <password>ClickHouse123!</password>
            </replica>
        </shard>
        <shard>
            <replica>
                <host>chnode2.domain.com</host>
                <port>9000</port>
                <user>default</user>
                <password>ClickHouse123!</password>
            </replica>
        </shard>
    </cluster_2S_1R>
</remote_servers>
```
|Parameter |Description |Example |
|----------|------------------------------|---------------------|
|shard |list of replicas on the cluster definition|list of replicas for each shard|
|replica|list of settings for each replica|settings entries for each replica|
|host|hostname, IP or FQDN of the server that will host a replica shard|`chnode1.domain.com`|
|port|port used to communicate using the native TCP protocol|9000|
|user|username that will be used to authenticate to the cluster instances|default|
|password|password for the user defined to allow connections to cluster instances|`ClickHouse123!`|
Restart ClickHouse and verify the cluster was created:

```sql
SHOW clusters;
```
You should see your cluster:
```response
┌─cluster───────┐
│ cluster_2S_1R │
└───────────────┘
```
3. Create and test distributed table {#3-create-and-test-distributed-table}
Create a new database on the new cluster using ClickHouse client on `chnode1`. The `ON CLUSTER` clause automatically creates the database on both nodes.

```sql
CREATE DATABASE db1 ON CLUSTER 'cluster_2S_1R';
```
Create a new table on the `db1` database. Once again, `ON CLUSTER` creates the table on both nodes.

```sql
CREATE TABLE db1.table1 ON CLUSTER 'cluster_2S_1R'
(
    `id` UInt64,
    `column1` String
)
ENGINE = MergeTree
ORDER BY column1
```
On the `chnode1` node, add a couple of rows:

```sql
INSERT INTO db1.table1
(id, column1)
VALUES
(1, 'abc'),
(2, 'def')
```
Add a couple of rows on the `chnode2` node:

```sql
INSERT INTO db1.table1
(id, column1)
VALUES
(3, 'ghi'),
(4, 'jkl')
```
Notice that running a `SELECT` statement on each node only shows the data on that node. For example, on `chnode1`:

```sql
SELECT *
FROM db1.table1
```
```response
Query id: 7ef1edbc-df25-462b-a9d4-3fe6f9cb0b6d
ββidββ¬βcolumn1ββ
β 1 β abc β
β 2 β def β
ββββββ΄ββββββββββ
2 rows in set. Elapsed: 0.006 sec.
```
On `chnode2`:

```sql
SELECT *
FROM db1.table1
```
```response
Query id: c43763cc-c69c-4bcc-afbe-50e764adfcbf
ββidββ¬βcolumn1ββ
β 3 β ghi β
β 4 β jkl β
ββββββ΄ββββββββββ
```
You can create a `Distributed` table to represent the data on the two shards. Tables with the `Distributed` table engine do not store any data of their own, but allow distributed query processing on multiple servers. Reads hit all the shards, and writes can be distributed across the shards. Run the following query on `chnode1`:

```sql
CREATE TABLE db1.dist_table (
    id UInt64,
    column1 String
)
ENGINE = Distributed(cluster_2S_1R,db1,table1)
```
Notice querying `dist_table` returns all four rows of data from the two shards:

```sql
SELECT *
FROM db1.dist_table
```
```response
Query id: 495bffa0-f849-4a0c-aeea-d7115a54747a
ββidββ¬βcolumn1ββ
β 1 β abc β
β 2 β def β
ββββββ΄ββββββββββ
ββidββ¬βcolumn1ββ
β 3 β ghi β
β 4 β jkl β
ββββββ΄ββββββββββ
4 rows in set. Elapsed: 0.018 sec.
```
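Conceptually, a read through the `Distributed` engine is a scatter-gather: the query is forwarded to one replica of every shard and the partial results are combined. A toy Python model of that behavior (shard contents hardcoded from the example above; this is an illustration, not how ClickHouse is implemented):

```python
# Toy model of a Distributed-engine read: query every shard, merge results.
shards = {
    "chnode1": [(1, "abc"), (2, "def")],   # local table on shard 1
    "chnode2": [(3, "ghi"), (4, "jkl")],   # local table on shard 2
}

def select_from_dist_table():
    rows = []
    for shard_rows in shards.values():     # reads hit all shards
        rows.extend(shard_rows)
    return rows

print(sorted(select_from_dist_table()))    # all four rows come back
```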
Summary {#summary}
This guide demonstrated how to set up a cluster using ClickHouse Keeper. With ClickHouse Keeper, you can configure clusters and define distributed tables that can be replicated across shards.
Configuring ClickHouse Keeper with unique paths {#configuring-clickhouse-keeper-with-unique-paths}
Description {#description} | {"source_file": "index.md"} | [
0.05141822621226311,
-0.127030611038208,
-0.07424689829349518,
0.03208862990140915,
-0.03629738837480545,
-0.06711957603693008,
-0.010866329073905945,
-0.025018073618412018,
-0.06796108186244965,
0.020526140928268433,
0.0703950971364975,
-0.07196632772684097,
0.11726139485836029,
-0.103910... |
a2a1c08d-0f7d-4d24-96da-cc4b15504812 | Configuring ClickHouse Keeper with unique paths {#configuring-clickhouse-keeper-with-unique-paths}
Description {#description}
This article describes how to use the built-in `{uuid}` macro setting to create unique entries in ClickHouse Keeper or ZooKeeper. Unique paths help when creating and dropping tables frequently, because they avoid having to wait several minutes for Keeper garbage collection to remove path entries: each time a path is created, a new `uuid` is used in that path, and paths are never reused.
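The effect is easy to model: even when a table with the same name is dropped and recreated, the `{uuid}` expansion yields a fresh Keeper path every time. A sketch, using Python's `uuid4` as a stand-in for the UUID that ClickHouse assigns to the table:

```python
import uuid

def replica_path(shard: int, db: str) -> str:
    # Stand-in for the '/clickhouse/tables/{shard}/db_uuid/{uuid}' expansion;
    # ClickHouse substitutes the table's own UUID here.
    return f"/clickhouse/tables/{shard}/{db}/{uuid.uuid4()}"

# "Create" the same table twice: the paths never collide, so there is
# no wait for Keeper garbage collection of the old path.
first = replica_path(1, "db_uuid")
second = replica_path(1, "db_uuid")
print(first != second)  # -> True
```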
Example environment {#example-environment}
A three-node cluster will be configured to have ClickHouse Keeper on all three nodes, and ClickHouse on two of the nodes. This provides ClickHouse Keeper with three nodes (including a tiebreaker node), and a single ClickHouse shard made up of two replicas.
|node|description|
|-----|-----|
|`chnode1.marsnet.local`|data node - cluster `cluster_1S_2R`|
|`chnode2.marsnet.local`|data node - cluster `cluster_1S_2R`|
|`chnode3.marsnet.local`|ClickHouse Keeper tiebreaker node|
Example config for cluster:

```xml
<remote_servers>
    <cluster_1S_2R>
        <shard>
            <replica>
                <host>chnode1.marsnet.local</host>
                <port>9440</port>
                <user>default</user>
                <password>ClickHouse123!</password>
                <secure>1</secure>
            </replica>
            <replica>
                <host>chnode2.marsnet.local</host>
                <port>9440</port>
                <user>default</user>
                <password>ClickHouse123!</password>
                <secure>1</secure>
            </replica>
        </shard>
    </cluster_1S_2R>
</remote_servers>
```
Procedures to set up tables to use `{uuid}` {#procedures-to-set-up-tables-to-use-uuid}
Configure macros on each server.

Example for server 1:

```xml
<macros>
    <shard>1</shard>
    <replica>replica_1</replica>
</macros>
```
:::note
Notice that we define macros for `shard` and `replica`, but `{uuid}` is not defined here; it is built-in and there is no need to define it.
:::
Create a database:

```sql
CREATE DATABASE db_uuid
ON CLUSTER 'cluster_1S_2R'
ENGINE Atomic;
```
```response
CREATE DATABASE db_uuid ON CLUSTER cluster_1S_2R
ENGINE = Atomic
Query id: 07fb7e65-beb4-4c30-b3ef-bd303e5c42b5
ββhostβββββββββββββββββββ¬βportββ¬βstatusββ¬βerrorββ¬βnum_hosts_remainingββ¬βnum_hosts_activeββ
β chnode2.marsnet.local β 9440 β 0 β β 1 β 0 β
β chnode1.marsnet.local β 9440 β 0 β β 0 β 0 β
βββββββββββββββββββββββββ΄βββββββ΄βββββββββ΄ββββββββ΄ββββββββββββββββββββββ΄βββββββββββββββββββ
```
Create a table on the cluster using the macros and `{uuid}`:
```sql
CREATE TABLE db_uuid.uuid_table1 ON CLUSTER 'cluster_1S_2R'
(
    id UInt64,
    column1 String
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/db_uuid/{uuid}', '{replica}' )
ORDER BY (id);
```
```response
CREATE TABLE db_uuid.uuid_table1 ON CLUSTER cluster_1S_2R
(
    `id` UInt64,
    `column1` String
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/db_uuid/{uuid}', '{replica}')
ORDER BY id

Query id: 8f542664-4548-4a02-bd2a-6f2c973d0dc4
ββhostβββββββββββββββββββ¬βportββ¬βstatusββ¬βerrorββ¬βnum_hosts_remainingββ¬βnum_hosts_activeββ
β chnode1.marsnet.local β 9440 β 0 β β 1 β 0 β
β chnode2.marsnet.local β 9440 β 0 β β 0 β 0 β
βββββββββββββββββββββββββ΄βββββββ΄βββββββββ΄ββββββββ΄ββββββββββββββββββββββ΄βββββββββββββββββββ
```
Create a distributed table:

```sql
CREATE TABLE db_uuid.dist_uuid_table1 ON CLUSTER 'cluster_1S_2R'
(
    id UInt64,
    column1 String
)
ENGINE = Distributed('cluster_1S_2R', 'db_uuid', 'uuid_table1' );
```
```response
CREATE TABLE db_uuid.dist_uuid_table1 ON CLUSTER cluster_1S_2R
(
    `id` UInt64,
    `column1` String
)
ENGINE = Distributed('cluster_1S_2R', 'db_uuid', 'uuid_table1')

Query id: 3bc7f339-ab74-4c7d-a752-1ffe54219c0e
ββhostβββββββββββββββββββ¬βportββ¬βstatusββ¬βerrorββ¬βnum_hosts_remainingββ¬βnum_hosts_activeββ
β chnode2.marsnet.local β 9440 β 0 β β 1 β 0 β
β chnode1.marsnet.local β 9440 β 0 β β 0 β 0 β
βββββββββββββββββββββββββ΄βββββββ΄βββββββββ΄ββββββββ΄ββββββββββββββββββββββ΄βββββββββββββββββββ
```
Testing {#testing}
Insert data into the first node (e.g., `chnode1`):

```sql
INSERT INTO db_uuid.uuid_table1
(id, column1)
VALUES
(1, 'abc');
```
```response
INSERT INTO db_uuid.uuid_table1 (id, column1) FORMAT Values
Query id: 0f178db7-50a6-48e2-9a1b-52ed14e6e0f9
Ok.
1 row in set. Elapsed: 0.033 sec.
```
Insert data into the second node (e.g., `chnode2`):

```sql
INSERT INTO db_uuid.uuid_table1
(id, column1)
VALUES
(2, 'def');
```
```response
INSERT INTO db_uuid.uuid_table1 (id, column1) FORMAT Values
Query id: edc6f999-3e7d-40a0-8a29-3137e97e3607
Ok.
1 row in set. Elapsed: 0.529 sec.
```
View records using the distributed table:

```sql
SELECT * FROM db_uuid.dist_uuid_table1;
```
```response
SELECT *
FROM db_uuid.dist_uuid_table1
Query id: 6cbab449-9e7f-40fe-b8c2-62d46ba9f5c8
ββidββ¬βcolumn1ββ
β 1 β abc β
ββββββ΄ββββββββββ
ββidββ¬βcolumn1ββ
β 2 β def β
ββββββ΄ββββββββββ
2 rows in set. Elapsed: 0.007 sec.
```
Alternatives {#alternatives}
The default replication path can be defined beforehand using macros together with `{uuid}`.

Set the default for tables on each node:

```xml
<default_replica_path>/clickhouse/tables/{shard}/db_uuid/{uuid}</default_replica_path>
<default_replica_name>{replica}</default_replica_name>
```
:::tip
You can also define a macro `{database}` on each node if nodes are used for certain databases.
:::
Create a table without explicit parameters:

```sql
CREATE TABLE db_uuid.uuid_table1 ON CLUSTER 'cluster_1S_2R'
(
    id UInt64,
    column1 String
)
ENGINE = ReplicatedMergeTree
ORDER BY (id);
```
```response
CREATE TABLE db_uuid.uuid_table1 ON CLUSTER cluster_1S_2R
(
    `id` UInt64,
    `column1` String
)
ENGINE = ReplicatedMergeTree
ORDER BY id

Query id: ab68cda9-ae41-4d6d-8d3b-20d8255774ee
ββhostβββββββββββββββββββ¬βportββ¬βstatusββ¬βerrorββ¬βnum_hosts_remainingββ¬βnum_hosts_activeββ
β chnode2.marsnet.local β 9440 β 0 β β 1 β 0 β
β chnode1.marsnet.local β 9440 β 0 β β 0 β 0 β
βββββββββββββββββββββββββ΄βββββββ΄βββββββββ΄ββββββββ΄ββββββββββββββββββββββ΄βββββββββββββββββββ
2 rows in set. Elapsed: 1.175 sec.
```
Verify that it used the settings from the default config:

```sql
SHOW CREATE TABLE db_uuid.uuid_table1;
```
```response
SHOW CREATE TABLE db_uuid.uuid_table1
CREATE TABLE db_uuid.uuid_table1
(
    `id` UInt64,
    `column1` String
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/db_uuid/{uuid}', '{replica}')
ORDER BY id
1 row in set. Elapsed: 0.003 sec.
```
Troubleshooting {#troubleshooting}
Example command to get table information and UUID:

```sql
SELECT * FROM system.tables
WHERE database = 'db_uuid' AND name = 'uuid_table1';
```

Example command to get information about the table in ZooKeeper using the UUID from the table above:

```sql
SELECT * FROM system.zookeeper
WHERE path = '/clickhouse/tables/1/db_uuid/9e8a3cc2-0dec-4438-81a7-c3e63ce2a1cf/replicas';
```
:::note
The database must be `Atomic`; if upgrading from a previous version, the `default` database is likely of the `Ordinary` type.
:::

To check, for example:

```sql
SELECT name, engine FROM system.databases WHERE name = 'db_uuid';
```
```response
SELECT
name,
engine
FROM system.databases
WHERE name = 'db_uuid'
Query id: b047d459-a1d2-4016-bcf9-3e97e30e49c2
ββnameβββββ¬βengineββ
β db_uuid β Atomic β
βββββββββββ΄βββββββββ
1 row in set. Elapsed: 0.004 sec.
```
ClickHouse Keeper dynamic reconfiguration {#reconfiguration}
Description {#description-1}
ClickHouse Keeper partially supports the ZooKeeper `reconfig` command for dynamic cluster reconfiguration if `keeper_server.enable_reconfiguration` is turned on.

:::note
If this setting is turned off, you may reconfigure the cluster by altering the replica's `raft_configuration` section manually. Make sure you edit the files on all replicas, as only the leader will apply changes.
Alternatively, you can send a
reconfig
query through any ZooKeeper-compatible client.
:::
A virtual node `/keeper/config` contains the last committed cluster configuration in the following format:

```text
server.id = server_host:server_port[;server_type][;server_priority]
server.id2 = ...
...
```

Each server entry is delimited by a newline.
`server_type` is either `participant` or `learner` (`learner` does not participate in leader elections). `server_priority` is a non-negative integer telling which nodes should be prioritised in leader elections. A priority of 0 means the server will never be a leader.
Example:

```sql
:) get /keeper/config
server.1=zoo1:9234;participant;1
server.2=zoo2:9234;participant;1
server.3=zoo3:9234;participant;1
```
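The format is simple enough to parse by hand. A small, hypothetical helper (not part of any ClickHouse client library) that turns the `/keeper/config` payload into per-server records, applying the documented defaults for omitted fields:

```python
def parse_keeper_config(payload: str) -> dict:
    """Parse 'server.<id>=host:port[;type][;priority]' lines."""
    servers = {}
    for line in payload.strip().splitlines():
        key, _, value = line.partition("=")
        server_id = int(key.split(".", 1)[1])
        parts = value.split(";")
        host, _, port = parts[0].rpartition(":")
        servers[server_id] = {
            "host": host,
            "port": int(port),
            # defaults per the reconfig docs: participant, priority 1
            "type": parts[1] if len(parts) > 1 else "participant",
            "priority": int(parts[2]) if len(parts) > 2 else 1,
        }
    return servers

config = "server.1=zoo1:9234;participant;1\nserver.2=zoo2:9234;participant;1"
print(parse_keeper_config(config)[1])
# -> {'host': 'zoo1', 'port': 9234, 'type': 'participant', 'priority': 1}
```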
You can use the `reconfig` command to add new servers, remove existing ones, and change existing servers' priorities. Here are some examples (using `clickhouse-keeper-client`):

```bash
# Add two new servers
reconfig add "server.5=localhost:123,server.6=localhost:234;learner"
# Remove two other servers
reconfig remove "3,4"
# Change existing server priority to 8
reconfig add "server.5=localhost:5123;participant;8"
```
And here are examples for `kazoo`:

```python
# Add two new servers, remove two other servers
reconfig(joining="server.5=localhost:123,server.6=localhost:234;learner", leaving="3,4")

# Change existing server priority to 8
reconfig(joining="server.5=localhost:5123;participant;8", leaving=None)
```
Servers in `joining` should be in the server format described above. Server entries should be delimited by commas. While adding new servers, you can omit `server_priority` (default value is 1) and `server_type` (default value is `participant`).

If you want to change an existing server's priority, add it to `joining` with the target priority. The server's host, port, and type must be equal to the existing server configuration.

Servers are added and removed in order of appearance in `joining` and `leaving`. All updates from `joining` are processed before updates from `leaving`.
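That ordering can be sketched as a simple loop: every `joining` entry is applied one at a time, then every `leaving` entry. A hypothetical model only; the real membership changes happen one per step inside NuRaft:

```python
def apply_reconfig(members: set, joining=(), leaving=()):
    """Model of incremental reconfig: one change at a time,
    all joins before all removals."""
    steps = []
    for server_id in joining:          # processed first, in order
        members = members | {server_id}
        steps.append(("add", server_id))
    for server_id in leaving:          # processed after all joins
        members = members - {server_id}
        steps.append(("remove", server_id))
    return members, steps

members, steps = apply_reconfig({1, 2, 3, 4}, joining=[5, 6], leaving=[3, 4])
print(members)  # -> {1, 2, 5, 6}
print(steps)    # adds happen before removals
```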
There are some caveats in the Keeper reconfiguration implementation:

- Only incremental reconfiguration is supported. Requests with a non-empty `new_members` are declined.

  The ClickHouse Keeper implementation relies on the NuRaft API to change membership dynamically. NuRaft has a way to add a single server or remove a single server, one at a time. This means each change to the configuration (each part of `joining`, each part of `leaving`) must be decided on separately. Thus there is no bulk reconfiguration available, as it would be misleading for end users.

  Changing server type (participant/learner) isn't possible either as it's not supported by NuRaft, and the only way would be to remove and add the server, which again would be misleading.
- You cannot use the returned `znodestat` value.
- The `from_version` field is not used. All requests with `from_version` set are declined. This is due to the fact that `/keeper/config` is a virtual node, which means it is not stored in persistent storage, but rather generated on-the-fly with the specified node config for every request. This decision was made so as not to duplicate data, as NuRaft already stores this config.
- Unlike ZooKeeper, there is no way to wait on cluster reconfiguration by submitting a `sync` command. The new config will be *eventually* applied, but with no time guarantees.
- The `reconfig` command may fail for various reasons. You can check the cluster's state and see whether the update was applied.
Converting a single-node keeper into a cluster {#converting-a-single-node-keeper-into-a-cluster}
Sometimes it's necessary to extend an experimental Keeper node into a cluster. Here's a scheme of how to do it step-by-step for a 3-node cluster:

- **IMPORTANT**: new nodes must be added in batches smaller than the current quorum, otherwise they will elect a leader among themselves. In this example, one by one.

1. The existing Keeper node must have the `keeper_server.enable_reconfiguration` configuration parameter turned on.
2. Start a second node with the full new configuration of the Keeper cluster.
3. After it's started, add it to node 1 using `reconfig`.
4. Now, start a third node and add it using `reconfig`.
5. Update the `clickhouse-server` configuration by adding the new Keeper node there and restart it to apply the changes.
6. Update the raft configuration of node 1 and, optionally, restart it.
To get comfortable with the process, here's a sandbox repository.
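For context, the batching rule follows from Raft quorum arithmetic: a quorum is a strict majority of the configured members, so a batch of not-yet-synced newcomers must stay smaller than that majority or they could elect a leader among themselves. A quick sanity-check helper (illustrative only, not part of any Keeper tooling):

```python
def quorum(cluster_size: int) -> int:
    """Minimum number of Keeper nodes that must agree (strict majority)."""
    return cluster_size // 2 + 1

# Quorum for small cluster sizes; a 3-node cluster tolerates 1 failure,
# which is why 3 nodes is the usual minimum for fault tolerance and why
# this guide adds new members one at a time.
for n in (1, 2, 3, 4, 5):
    print(n, quorum(n))
```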
Unsupported features {#unsupported-features}
While ClickHouse Keeper aims to be fully compatible with ZooKeeper, there are some features that are currently not implemented (although development is ongoing):

- `create` does not support returning a `Stat` object
- `create` does not support TTL
- `addWatch` does not work with `PERSISTENT` watches
- `removeWatch` and `removeAllWatches` are not supported
- `setWatches` is not supported
- Creating `CONTAINER` type znodes is not supported
- SASL authentication is not supported
sidebar_label: 'SSL user certificate authentication'
sidebar_position: 3
slug: /guides/sre/ssl-user-auth
title: 'Configuring SSL User Certificate for Authentication'
description: 'This guide provides simple and minimal settings to configure authentication with SSL user certificates.'
doc_type: 'guide'
keywords: ['ssl', 'authentication', 'security', 'certificates', 'user management']
Configuring SSL user certificate for authentication
import SelfManaged from '@site/docs/_snippets/_self_managed_only_no_roadmap.md';
This guide provides simple and minimal settings to configure authentication with SSL user certificates. The tutorial builds on the
Configuring SSL-TLS user guide
.
:::note
SSL user authentication is supported when using the `https`, `native`, `mysql`, and `postgresql` interfaces.

ClickHouse nodes need `<verificationMode>strict</verificationMode>` set for secure authentication (although `relaxed` will work for testing purposes).

If you use AWS NLB with the MySQL interface, you have to ask AWS support to enable the undocumented option:

> I would like to be able to configure our NLB proxy protocol v2 as below `proxy_protocol_v2.client_to_server.header_placement,Value=on_first_ack`.
:::
1. Create SSL user certificates {#1-create-ssl-user-certificates}
:::note
This example uses self-signed certificates with a self-signed CA. For production environments, create a CSR and submit to your PKI team or certificate provider to obtain a proper certificate.
:::
Generate a Certificate Signing Request (CSR) and key. The basic format is the following:

```bash
openssl req -newkey rsa:2048 -nodes -subj "/CN=<my_host>:<my_user>" -keyout <my_cert_name>.key -out <my_cert_name>.csr
```
In this example, we'll use this for the domain and user that will be used in this sample environment:

```bash
openssl req -newkey rsa:2048 -nodes -subj "/CN=chnode1.marsnet.local:cert_user" -keyout chnode1_cert_user.key -out chnode1_cert_user.csr
```
:::note
The CN is arbitrary and any string can be used as an identifier for the certificate. It is used when creating the user in the following steps.
:::
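The `<host>:<user>` shape of the CN is just a convention used by this guide, not something ClickHouse requires. If you follow it, splitting the CN back apart is trivial; a hypothetical helper for scripts that manage such certificates:

```python
def split_cn(common_name: str):
    """Split a 'host:user' CN, as used in this guide, into its parts."""
    host, sep, user = common_name.rpartition(":")
    if not sep:
        raise ValueError(f"CN {common_name!r} has no ':user' suffix")
    return host, user

host, user = split_cn("chnode1.marsnet.local:cert_user")
print(host, user)  # -> chnode1.marsnet.local cert_user
```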
Generate and sign the new user certificate that will be used for authentication. The basic format is the following:

```bash
openssl x509 -req -in <my_cert_name>.csr -out <my_cert_name>.crt -CA <my_ca_cert>.crt -CAkey <my_ca_cert>.key -days 365
```
In this example, we'll use this for the domain and user that will be used in this sample environment:

```bash
openssl x509 -req -in chnode1_cert_user.csr -out chnode1_cert_user.crt -CA marsnet_ca.crt -CAkey marsnet_ca.key -days 365
```
2. Create a SQL user and grant permissions {#2-create-a-sql-user-and-grant-permissions}
:::note
For details on how to enable SQL users and set roles, refer to the Defining SQL Users and Roles user guide.
:::
Create a SQL user defined to use certificate authentication:

```sql
CREATE USER cert_user IDENTIFIED WITH ssl_certificate CN 'chnode1.marsnet.local:cert_user';
```

Grant privileges to the new certificate user:

```sql
GRANT ALL ON *.* TO cert_user WITH GRANT OPTION;
```
:::note
The user is granted full admin privileges in this exercise for demonstration purposes. Refer to the ClickHouse RBAC documentation for permissions settings.
:::
:::note
We recommend using SQL to define users and roles. However, if you are currently defining users and roles in configuration files, the user will look like:

```xml
<users>
    <cert_user>
        <ssl_certificates>
            <common_name>chnode1.marsnet.local:cert_user</common_name>
        </ssl_certificates>
        <networks>
            <ip>::/0</ip>
        </networks>
        <profile>default</profile>
        <access_management>1</access_management>
        <!-- additional options -->
    </cert_user>
</users>
```
:::
3. Testing {#3-testing}
Copy the user certificate, user key and CA certificate to a remote node.
Configure OpenSSL in the ClickHouse client config with the certificate and paths:

```xml
<openSSL>
    <client>
        <certificateFile>my_cert_name.crt</certificateFile>
        <privateKeyFile>my_cert_name.key</privateKeyFile>
        <caConfig>my_ca_cert.crt</caConfig>
    </client>
</openSSL>
```
Run
clickhouse-client
.
bash
clickhouse-client --user <my_user> --query 'SHOW TABLES'
:::note
Note that the password passed to clickhouse-client is ignored when a certificate is specified in the config.
:::
4. Testing HTTP {#4-testing-http}
Copy the user certificate, user key and CA certificate to a remote node.
Use
curl
to test a sample SQL command. The basic format is:
bash
echo 'SHOW TABLES' | curl 'https://<clickhouse_node>:8443' --cert <my_cert_name>.crt --key <my_cert_name>.key --cacert <my_ca_cert>.crt -H "X-ClickHouse-SSL-Certificate-Auth: on" -H "X-ClickHouse-User: <my_user>" --data-binary @-
For example:
bash
echo 'SHOW TABLES' | curl 'https://chnode1:8443' --cert chnode1_cert_user.crt --key chnode1_cert_user.key --cacert marsnet_ca.crt -H "X-ClickHouse-SSL-Certificate-Auth: on" -H "X-ClickHouse-User: cert_user" --data-binary @-
The output will be similar to the following:
response
INFORMATION_SCHEMA
default
information_schema
system
:::note
Notice that no password was specified; the certificate is used in lieu of a password and is how ClickHouse authenticates the user.
:::
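The same certificate authentication can be driven from code. Below is a minimal sketch of how the request options map to Python's `requests` library; the header names come from this guide, while the helper function and file names are illustrative assumptions:

```python
def cert_auth_kwargs(user, cert_file, key_file, ca_file):
    """Build requests keyword arguments for ClickHouse SSL certificate auth."""
    return {
        # Tell ClickHouse to authenticate by certificate instead of password
        "headers": {
            "X-ClickHouse-SSL-Certificate-Auth": "on",
            "X-ClickHouse-User": user,
        },
        "cert": (cert_file, key_file),  # client certificate and private key
        "verify": ca_file,              # CA certificate used to verify the server
    }
```

These keyword arguments would then be passed as, for example, `requests.post("https://chnode1:8443", data="SHOW TABLES", **cert_auth_kwargs("cert_user", "chnode1_cert_user.crt", "chnode1_cert_user.key", "marsnet_ca.crt"))`.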
Summary {#summary} | {"source_file": "ssl-user-auth.md"} | [
0.013667454943060875,
-0.053364019840955734,
-0.1707341969013214,
0.03131672367453575,
-0.09755631536245346,
-0.00311585096642375,
0.060913506895303726,
0.014355371706187725,
-0.11476359516382217,
-0.01975582353770733,
0.005600631702691317,
-0.04887009412050247,
0.1375018209218979,
0.04665... |
f32522e5-eb32-4ffd-addd-5df3dd9dc139 | :::note
Notice that no password was specified; the certificate is used in lieu of a password and is how ClickHouse authenticates the user.
:::
Summary {#summary}
This article showed the basics of creating and configuring a user for SSL certificate authentication. This method can be used with
clickhouse-client
or any clients which support the
https
interface and where HTTP headers can be set. The generated certificate and key should be kept private and with limited access since the certificate is used to authenticate and authorize the user for operations on the ClickHouse database. Treat the certificate and key as if they were passwords. | {"source_file": "ssl-user-auth.md"} | [
-0.06409861892461777,
0.04745084419846535,
-0.11013221740722656,
-0.024485047906637192,
-0.10198706388473511,
-0.028515156358480453,
0.026008639484643936,
-0.033028364181518555,
0.0010772465029731393,
-0.07224015146493912,
0.028052043169736862,
-0.025556134060025215,
0.07665009051561356,
0... |
948bd421-5da5-4fa2-9601-1c58ad5406ee | slug: /operations/access-rights
sidebar_position: 1
sidebar_label: 'Users and roles'
title: 'Access Control and Account Management'
keywords: ['ClickHouse Cloud', 'Access Control', 'User Management', 'RBAC', 'Security']
description: 'Describes access control and account management in ClickHouse Cloud'
doc_type: 'guide'
Creating users and roles in ClickHouse
ClickHouse supports access control management based on
RBAC
approach.
ClickHouse access entities:
-
User account
-
Role
-
Row Policy
-
Settings Profile
-
Quota
You can configure access entities using:
SQL-driven workflow.
You need to
enable
this functionality.
Server
configuration files
users.xml
and
config.xml
.
We recommend using the SQL-driven workflow. Both configuration methods work simultaneously, so if you use the server configuration files for managing accounts and access rights, you can smoothly switch to the SQL-driven workflow.
:::note
You can't manage the same access entity by both configuration methods simultaneously.
:::
:::note
If you are looking to manage ClickHouse Cloud console users, please refer to this
page
:::
To see all users, roles, profiles, etc. and all their grants use
SHOW ACCESS
statement.
Overview {#access-control-usage}
By default, the ClickHouse server provides the
default
user account, which is not allowed to use SQL-driven access control and account management but has all the rights and permissions. The
default
user account is used whenever the username is not defined, for example, at login from the client or in distributed queries. In distributed query processing, the default user account is used if the configuration of the server or cluster does not specify the
user and password
properties.
If you just started using ClickHouse, consider the following scenario:
Enable
SQL-driven access control and account management for the
default
user.
Log in to the
default
user account and create all the required users. Don't forget to create an administrator account (
GRANT ALL ON *.* TO admin_user_account WITH GRANT OPTION
).
Restrict permissions
for the
default
user and disable SQL-driven access control and account management for it.
Properties of current solution {#access-control-properties}
You can grant permissions for databases and tables even if they do not exist.
If a table is deleted, all the privileges that correspond to this table are not revoked. This means that even if you create a new table with the same name later, all the privileges remain valid. To revoke privileges corresponding to the deleted table, you need to execute, for example, the
REVOKE ALL PRIVILEGES ON db.table FROM ALL
query.
There are no lifetime settings for privileges.
User account {#user-account-management}
A user account is an access entity that allows someone to be authorized in ClickHouse. A user account contains:
Identification information. | {"source_file": "index.md"} | [
0.013002901338040829,
-0.07934901863336563,
-0.09487150609493256,
0.030197037383913994,
-0.09625352919101715,
0.04362761229276657,
0.104023277759552,
-0.07081294804811478,
-0.05078369379043579,
0.05047561973333359,
0.015655800700187683,
-0.01615663431584835,
0.09700534492731094,
-0.0337191... |
bc5034af-fe1f-4a23-9f70-a5177ec92f8f | User account {#user-account-management}
A user account is an access entity that allows someone to be authorized in ClickHouse. A user account contains:
Identification information.
Privileges
that define the scope of queries the user can execute.
Hosts that are allowed to connect to the ClickHouse server.
Assigned and default roles.
Settings with their constraints applied by default at user login.
Assigned settings profiles.
Privileges can be granted to a user account by the
GRANT
query or by assigning
roles
. To revoke privileges from a user, ClickHouse provides the
REVOKE
query. To list privileges for a user, use the
SHOW GRANTS
statement.
Management queries:
CREATE USER
ALTER USER
DROP USER
SHOW CREATE USER
SHOW USERS
Settings applying {#access-control-settings-applying}
Settings can be configured at different levels: for a user account, in its granted roles, and in settings profiles. At user login, if a setting is configured for different access entities, the value and constraints of this setting are applied as follows (from higher to lower priority):
User account settings.
The settings for the default roles of the user account. If a setting is configured in some roles, then order of the setting application is undefined.
The settings from settings profiles assigned to a user or to its default roles. If a setting is configured in some profiles, then order of setting application is undefined.
Settings applied to the entire server by default or from the
default profile
.
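The priority order above can be sketched as a simple first-match lookup, highest priority first (a hypothetical illustration, not ClickHouse code):

```python
def effective_setting(name, user, role_defaults, profiles, server_defaults):
    """Resolve a setting value per the priority list above."""
    # From higher to lower priority: user account, default roles,
    # settings profiles, then server-wide defaults.
    for scope in (user, role_defaults, profiles, server_defaults):
        if name in scope:
            return scope[name]
    return None

# User-level value wins over profile and server defaults → 4
print(effective_setting("max_threads",
                        {"max_threads": 4}, {}, {"max_threads": 8}, {"max_threads": 16}))
```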
Role {#role-management}
A role is a container for access entities that can be granted to a user account.
A role contains:
Privileges
Settings and constraints
List of assigned roles
Management queries:
CREATE ROLE
ALTER ROLE
DROP ROLE
SET ROLE
SET DEFAULT ROLE
SHOW CREATE ROLE
SHOW ROLES
Privileges can be granted to a role by the
GRANT
query. To revoke privileges from a role ClickHouse provides the
REVOKE
query.
Row policy {#row-policy-management}
Row policy is a filter that defines which of the rows are available to a user or a role. Row policy contains filters for one particular table, as well as a list of roles and/or users which should use this row policy.
:::note
Row policies make sense only for users with read-only access. If a user can modify tables or copy partitions between tables, it defeats the restrictions of row policies.
:::
Management queries:
CREATE ROW POLICY
ALTER ROW POLICY
DROP ROW POLICY
SHOW CREATE ROW POLICY
SHOW POLICIES
Settings profile {#settings-profiles-management}
Settings profile is a collection of
settings
. Settings profile contains settings and constraints, as well as a list of roles and/or users to which this profile is applied.
Management queries:
CREATE SETTINGS PROFILE
ALTER SETTINGS PROFILE
DROP SETTINGS PROFILE
SHOW CREATE SETTINGS PROFILE
SHOW PROFILES
Quota {#quotas-management} | {"source_file": "index.md"} | [
-0.014355819672346115,
0.015355868265032768,
-0.028805838897824287,
0.06481882184743881,
-0.09840067476034164,
0.027759019285440445,
0.12287882715463638,
-0.036949411034584045,
-0.06490227580070496,
-0.022073714062571526,
0.0248640738427639,
-0.019852308556437492,
0.10245303064584732,
0.00... |
60633b9f-8be4-4f47-ba79-3db75ade5dab | Management queries:
CREATE SETTINGS PROFILE
ALTER SETTINGS PROFILE
DROP SETTINGS PROFILE
SHOW CREATE SETTINGS PROFILE
SHOW PROFILES
Quota {#quotas-management}
Quota limits resource usage. See
Quotas
.
Quota contains a set of limits for some durations, as well as a list of roles and/or users which should use this quota.
Management queries:
CREATE QUOTA
ALTER QUOTA
DROP QUOTA
SHOW CREATE QUOTA
SHOW QUOTA
SHOW QUOTAS
Enabling SQL-driven access control and account management {#enabling-access-control}
Set up a directory for configuration storage.
ClickHouse stores access entity configurations in the folder set in the
access_control_path
server configuration parameter.
Enable SQL-driven access control and account management for at least one user account.
By default, SQL-driven access control and account management is disabled for all users. You need to configure at least one user in the
users.xml
configuration file and set the values of the
access_management
,
named_collection_control
,
show_named_collections
, and
show_named_collections_secrets
settings to 1.
Defining SQL users and roles {#defining-sql-users-and-roles}
:::tip
If you are working in ClickHouse Cloud, please see
Cloud access management
.
:::
This article shows the basics of defining SQL users and roles and applying those privileges and permissions to databases, tables, rows, and columns.
Enabling SQL user mode {#enabling-sql-user-mode}
Enable SQL user mode in the
users.xml
file under the
<default>
user:
xml
<access_management>1</access_management>
<named_collection_control>1</named_collection_control>
<show_named_collections>1</show_named_collections>
<show_named_collections_secrets>1</show_named_collections_secrets>
:::note
The
default
user is the only user that gets created with a fresh install, and is also the account used for internode communications, by default.
In production, it is recommended to disable this user once a SQL admin user has been created and internode communication has been configured with
<secret>
, cluster credentials, and/or internode HTTP and transport protocol credentials, since the
default
account is otherwise used for internode communication.
:::
Restart the nodes to apply the changes.
Start the ClickHouse client:
bash
clickhouse-client --user default --password <password>
Defining users {#defining-users}
Create a SQL administrator account:
sql
CREATE USER clickhouse_admin IDENTIFIED BY 'password';
Grant the new user full administrative rights
sql
GRANT ALL ON *.* TO clickhouse_admin WITH GRANT OPTION;
Alter permissions {#alter-permissions}
This article is intended to provide you with a better understanding of how to define permissions, and how permissions work when using
ALTER
statements for privileged users. | {"source_file": "index.md"} | [
0.03964398801326752,
-0.03413449972867966,
-0.08896404504776001,
0.07276182621717453,
-0.08946678787469864,
0.05545007437467575,
0.0982237234711647,
-0.008103299885988235,
-0.02467694878578186,
0.06618257611989975,
-0.011567749083042145,
-0.060253869742155075,
0.09697358310222626,
-0.00697... |
63900727-b7cb-47d5-b5e2-4d51ced9dca5 | This article is intended to provide you with a better understanding of how to define permissions, and how permissions work when using
ALTER
statements for privileged users.
The
ALTER
statements are divided into several categories, some of which are hierarchical and some of which are not and must be explicitly defined.
Example DB, table and user configuration
1. With an admin user, create a sample user
sql
CREATE USER my_user IDENTIFIED BY 'password';
Create sample database
sql
CREATE DATABASE my_db;
Create a sample table
sql
CREATE TABLE my_db.my_table (id UInt64, column1 String) ENGINE = MergeTree() ORDER BY id;
Create a sample admin user to grant/revoke privileges
sql
CREATE USER my_alter_admin IDENTIFIED BY 'password';
:::note
To grant or revoke permissions, the admin user must have the
WITH GRANT OPTION
privilege.
For example:
sql
GRANT ALTER ON my_db.* TO my_alter_admin WITH GRANT OPTION;
To
GRANT
or
REVOKE
privileges, the user must have those privileges themselves first.
:::
Granting or Revoking Privileges
The
ALTER
hierarchy:
response
├── ALTER (only for table and view)/
│   ├── ALTER TABLE/
│   │   ├── ALTER UPDATE
│   │   ├── ALTER DELETE
│   │   ├── ALTER COLUMN/
│   │   │   ├── ALTER ADD COLUMN
│   │   │   ├── ALTER DROP COLUMN
│   │   │   ├── ALTER MODIFY COLUMN
│   │   │   ├── ALTER COMMENT COLUMN
│   │   │   ├── ALTER CLEAR COLUMN
│   │   │   └── ALTER RENAME COLUMN
│   │   ├── ALTER INDEX/
│   │   │   ├── ALTER ORDER BY
│   │   │   ├── ALTER SAMPLE BY
│   │   │   ├── ALTER ADD INDEX
│   │   │   ├── ALTER DROP INDEX
│   │   │   ├── ALTER MATERIALIZE INDEX
│   │   │   └── ALTER CLEAR INDEX
│   │   ├── ALTER CONSTRAINT/
│   │   │   ├── ALTER ADD CONSTRAINT
│   │   │   └── ALTER DROP CONSTRAINT
│   │   ├── ALTER TTL/
│   │   │   └── ALTER MATERIALIZE TTL
│   │   ├── ALTER SETTINGS
│   │   ├── ALTER MOVE PARTITION
│   │   ├── ALTER FETCH PARTITION
│   │   └── ALTER FREEZE PARTITION
│   └── ALTER LIVE VIEW/
│       ├── ALTER LIVE VIEW REFRESH
│       └── ALTER LIVE VIEW MODIFY QUERY
├── ALTER DATABASE
├── ALTER USER
├── ALTER ROLE
├── ALTER QUOTA
├── ALTER [ROW] POLICY
└── ALTER [SETTINGS] PROFILE
Granting
ALTER
Privileges to a User or Role
Using
GRANT ALTER ON *.* TO my_user
will only affect top-level
ALTER TABLE
and
ALTER VIEW
, other
ALTER
statements must be individually granted or revoked.
For example, granting basic
ALTER
privilege:
sql
GRANT ALTER ON my_db.my_table TO my_user;
Resulting set of privileges:
sql
SHOW GRANTS FOR my_user;
```response
SHOW GRANTS FOR my_user
Query id: 706befbc-525e-4ec1-a1a2-ba2508cc09e3
┌─GRANTS FOR my_user─────────────────────────────────────────┐
│ GRANT ALTER TABLE, ALTER VIEW ON my_db.my_table TO my_user │
└────────────────────────────────────────────────────────────┘
``` | {"source_file": "index.md"} | [
0.007655371446162462,
0.005985187832266092,
-0.040864549577236176,
0.0002783700474537909,
-0.09238409996032715,
-0.015301593579351902,
0.031247595325112343,
0.044520530849695206,
-0.028411608189344406,
0.027898374944925308,
0.019747119396924973,
-0.02962399646639824,
0.06324959546327591,
-... |
a12b950d-7ef2-4a6d-b263-300f6ed5cd3c | This will grant all permissions under
ALTER TABLE
and
ALTER VIEW
from the example above, however, it will not grant certain other
ALTER
permissions such as
ALTER ROW POLICY
(Refer back to the hierarchy and you will see that
ALTER ROW POLICY
is not a child of
ALTER TABLE
or
ALTER VIEW
). Those must be explicitly granted or revoked.
If only a subset of
ALTER
permissions is needed, then each can be granted separately; if that permission has sub-privileges, those will be granted automatically as well.
For example:
sql
GRANT ALTER COLUMN ON my_db.my_table TO my_user;
Grants would be set as:
sql
SHOW GRANTS FOR my_user;
```response
SHOW GRANTS FOR my_user
Query id: 47b3d03f-46ac-4385-91ec-41119010e4e2
┌─GRANTS FOR my_user────────────────────────────────┐
│ GRANT ALTER COLUMN ON default.my_table TO my_user │
└───────────────────────────────────────────────────┘
1 row in set. Elapsed: 0.004 sec.
```
This also gives the following sub-privileges:
sql
ALTER ADD COLUMN
ALTER DROP COLUMN
ALTER MODIFY COLUMN
ALTER COMMENT COLUMN
ALTER CLEAR COLUMN
ALTER RENAME COLUMN
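The automatic expansion of sub-privileges can be modeled with a small sketch. The hierarchy dictionary below is a hypothetical illustration reduced to the COLUMN branch of the tree shown earlier, not ClickHouse internals:

```python
ALTER_HIERARCHY = {
    "ALTER COLUMN": [
        "ALTER ADD COLUMN", "ALTER DROP COLUMN", "ALTER MODIFY COLUMN",
        "ALTER COMMENT COLUMN", "ALTER CLEAR COLUMN", "ALTER RENAME COLUMN",
    ],
}

def expand(privilege):
    """Granting a parent privilege implicitly grants all of its children."""
    granted = [privilege]
    for child in ALTER_HIERARCHY.get(privilege, []):
        granted.extend(expand(child))
    return granted

print(expand("ALTER COLUMN"))  # the parent plus its six sub-privileges
```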
Revoking
ALTER
privileges from Users and Roles
The
REVOKE
statement works similarly to the
GRANT
statement.
If a user/role was granted a sub-privilege, you can either revoke that sub-privilege directly or revoke the higher-level privilege it inherits from.
For example, if the user was granted
ALTER ADD COLUMN
sql
GRANT ALTER ADD COLUMN ON my_db.my_table TO my_user;
```response
GRANT ALTER ADD COLUMN ON my_db.my_table TO my_user
Query id: 61fe0fdc-1442-4cd6-b2f3-e8f2a853c739
Ok.
0 rows in set. Elapsed: 0.002 sec.
```
sql
SHOW GRANTS FOR my_user;
```response
SHOW GRANTS FOR my_user
Query id: 27791226-a18f-46c8-b2b4-a9e64baeb683
┌─GRANTS FOR my_user──────────────────────────────────┐
│ GRANT ALTER ADD COLUMN ON my_db.my_table TO my_user │
└─────────────────────────────────────────────────────┘
```
A privilege can be revoked individually:
sql
REVOKE ALTER ADD COLUMN ON my_db.my_table FROM my_user;
Or it can be revoked from any of the upper levels (revoking all of the COLUMN sub-privileges):
sql
REVOKE ALTER COLUMN ON my_db.my_table FROM my_user;
```response
REVOKE ALTER COLUMN ON my_db.my_table FROM my_user
Query id: b882ba1b-90fb-45b9-b10f-3cda251e2ccc
Ok.
0 rows in set. Elapsed: 0.002 sec.
```
sql
SHOW GRANTS FOR my_user;
```response
SHOW GRANTS FOR my_user
Query id: e7d341de-de65-490b-852c-fa8bb8991174
Ok.
0 rows in set. Elapsed: 0.003 sec.
```
Additional
The privileges must be granted by a user that not only has the
WITH GRANT OPTION
but also has the privileges themselves.
To grant an admin user a privilege and also allow them to administer a set of privileges, see the example below:
sql
GRANT SELECT, ALTER COLUMN ON my_db.my_table TO my_alter_admin WITH GRANT OPTION;
Now the user can grant or revoke
ALTER COLUMN
and all sub-privileges.
Testing | {"source_file": "index.md"} | [
-0.060554806143045425,
-0.02158491313457489,
0.017302371561527252,
-0.016332274302840233,
-0.0473417192697525,
0.023172300308942795,
0.04584449902176857,
-0.036180246621370316,
-0.05202044919133186,
0.007474815007299185,
-0.007241232320666313,
-0.0684940442442894,
0.051419030874967575,
-0.... |
dd92a48b-cc2a-4331-a861-763864d55d54 | sql
GRANT SELECT, ALTER COLUMN ON my_db.my_table TO my_alter_admin WITH GRANT OPTION;
Now the user can grant or revoke
ALTER COLUMN
and all sub-privileges.
Testing
Add the
SELECT
privilege
sql
GRANT SELECT ON my_db.my_table TO my_user;
Add the add column privilege to the user
sql
GRANT ADD COLUMN ON my_db.my_table TO my_user;
Log in with the restricted user
bash
clickhouse-client --user my_user --password password --port 9000 --host <your_clickhouse_host>
Test adding a column
sql
ALTER TABLE my_db.my_table ADD COLUMN column2 String;
```response
ALTER TABLE my_db.my_table
ADD COLUMN `column2` String
Query id: d5d6bfa1-b80c-4d9f-8dcd-d13e7bd401a5
Ok.
0 rows in set. Elapsed: 0.010 sec.
```
sql
DESCRIBE my_db.my_table;
```response
DESCRIBE TABLE my_db.my_table
Query id: ab9cb2d0-5b1a-42e1-bc9c-c7ff351cb272
┌─name────┬─type───┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ id      │ UInt64 │              │                    │         │                  │                │
│ column1 │ String │              │                    │         │                  │                │
│ column2 │ String │              │                    │         │                  │                │
└─────────┴────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Test deleting a column
sql
ALTER TABLE my_db.my_table DROP COLUMN column2;
```response
ALTER TABLE my_db.my_table
DROP COLUMN column2
Query id: 50ad5f6b-f64b-4c96-8f5f-ace87cea6c47
0 rows in set. Elapsed: 0.004 sec.
Received exception from server (version 22.5.1):
Code: 497. DB::Exception: Received from chnode1.marsnet.local:9440. DB::Exception: my_user: Not enough privileges. To execute this query it's necessary to have grant ALTER DROP COLUMN(column2) ON my_db.my_table. (ACCESS_DENIED)
```
Testing the alter admin by granting the permission
sql
GRANT SELECT, ALTER COLUMN ON my_db.my_table TO my_alter_admin WITH GRANT OPTION;
Log in with the alter admin user
bash
clickhouse-client --user my_alter_admin --password password --port 9000 --host <my_clickhouse_host>
Grant a sub-privilege
sql
GRANT ALTER ADD COLUMN ON my_db.my_table TO my_user;
```response
GRANT ALTER ADD COLUMN ON my_db.my_table TO my_user
Query id: 1c7622fa-9df1-4c54-9fc3-f984c716aeba
Ok.
```
Test granting a privilege that the alter admin user does not have and that is not a sub-privilege of the admin user's grants.
sql
GRANT ALTER UPDATE ON my_db.my_table TO my_user;
```response
GRANT ALTER UPDATE ON my_db.my_table TO my_user
Query id: 191690dc-55a6-4625-8fee-abc3d14a5545
0 rows in set. Elapsed: 0.004 sec. | {"source_file": "index.md"} | [
0.027474088594317436,
-0.018824387341737747,
-0.05167321860790253,
0.0052390871569514275,
-0.11639780551195145,
-0.02765413001179695,
0.043792035430669785,
0.024577448144555092,
-0.04202771931886673,
0.05728665366768837,
0.00837213359773159,
-0.046169742941856384,
0.05786953493952751,
-0.0... |
5e1a34e1-6707-4211-b46b-1435958344b3 | ```response
GRANT ALTER UPDATE ON my_db.my_table TO my_user
Query id: 191690dc-55a6-4625-8fee-abc3d14a5545
0 rows in set. Elapsed: 0.004 sec.
Received exception from server (version 22.5.1):
Code: 497. DB::Exception: Received from chnode1.marsnet.local:9440. DB::Exception: my_alter_admin: Not enough privileges. To execute this query it's necessary to have grant ALTER UPDATE ON my_db.my_table WITH GRANT OPTION. (ACCESS_DENIED)
```
Summary
The ALTER privileges are hierarchical for
ALTER
with tables and views but not for other
ALTER
statements. Permissions can be set at a granular level or by grouping permissions, and they can be revoked similarly. The user granting or revoking must have
WITH GRANT OPTION
to set privileges on users, including the acting user themselves, and must have the privilege already. The acting user cannot revoke their own privileges if they do not have the grant option privilege themselves. | {"source_file": "index.md"} | [
-0.02785639837384224,
0.013022871688008308,
0.024866780266165733,
0.050147973001003265,
-0.07516495883464813,
-0.009864351712167263,
0.029547708109021187,
-0.010981331579387188,
-0.013188255950808525,
0.07333558797836304,
0.009408815763890743,
-0.031162984669208527,
0.06593650579452515,
-0... |
3c1f0e18-2b55-4b4c-8ca8-360617ee4e1a | sidebar_label: 'Configuring LDAP'
sidebar_position: 2
slug: /guides/sre/configuring-ldap
title: 'Configuring ClickHouse to Use LDAP for Authentication and Role Mapping'
description: 'Describes how to configure ClickHouse to use LDAP for authentication and role mapping'
keywords: ['LDAP configuration', 'LDAP authentication', 'role mapping', 'user management', 'SRE guide']
doc_type: 'guide'
import SelfManaged from '@site/docs/_snippets/_self_managed_only_no_roadmap.md';
Configuring ClickHouse to use LDAP for authentication and role mapping
ClickHouse can be configured to use LDAP to authenticate ClickHouse database users. This guide provides a simple example of integrating ClickHouse with an LDAP system authenticating to a publicly available directory.
1. Configure LDAP connection settings in ClickHouse {#1-configure-ldap-connection-settings-in-clickhouse}
Test your connection to this public LDAP server:
bash
$ ldapsearch -x -b dc=example,dc=com -H ldap://ldap.forumsys.com
The reply will be something like this:
```response
extended LDIF
LDAPv3
base
with scope subtree
filter: (objectclass=*)
requesting: ALL
example.com
dn: dc=example,dc=com
objectClass: top
objectClass: dcObject
objectClass: organization
o: example.com
dc: example
...
```
Edit the
config.xml
file and add the following to configure LDAP:
xml
<ldap_servers>
<test_ldap_server>
<host>ldap.forumsys.com</host>
<port>389</port>
<bind_dn>uid={user_name},dc=example,dc=com</bind_dn>
<enable_tls>no</enable_tls>
<tls_require_cert>never</tls_require_cert>
</test_ldap_server>
</ldap_servers>
:::note
The
<test_ldap_server>
tag is an arbitrary label that identifies a particular LDAP server.
:::
These are the basic settings used above:
|Parameter |Description |Example |
|----------|------------------------------|---------------------|
|host |hostname or IP of LDAP server |ldap.forumsys.com |
|port |directory port for LDAP server|389 |
|bind_dn |template path to users |
uid={user_name},dc=example,dc=com
|
|enable_tls|whether to use secure ldap |no |
|tls_require_cert |whether to require certificate for connection|never|
:::note
In this example, since the public server uses port 389, which is not a secure port, we disable TLS for demonstration purposes.
:::
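The `bind_dn` template substitutes the login name at authentication time. A sketch of that substitution (the template value is the one from this guide; the helper function is an illustration, not ClickHouse code):

```python
def resolve_bind_dn(template, user_name):
    """ClickHouse substitutes {user_name} into the bind_dn template."""
    return template.replace("{user_name}", user_name)

print(resolve_bind_dn("uid={user_name},dc=example,dc=com", "einstein"))
# → uid=einstein,dc=example,dc=com
```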
:::note
View the
LDAP doc page
for more details on the LDAP settings.
::: | {"source_file": "configuring-ldap.md"} | [
0.008897651918232441,
-0.047917820513248444,
-0.07091935724020004,
0.017450764775276184,
-0.02091914974153042,
-0.023216959089040756,
0.04485448822379112,
0.034518051892519,
-0.1518697291612625,
-0.04395120218396187,
0.08440535515546799,
-0.028958875685930252,
0.10028468817472458,
-0.02234... |
88285390-1acf-4a34-81d8-72e7f6bed2e3 | :::note
View the
LDAP doc page
for more details on the LDAP settings.
:::
Add the
<ldap>
section to the
<user_directories>
section to configure the user role mapping. This section defines which role a user receives when they authenticate. In this basic example, any user authenticating to LDAP will receive the
scientists_role
which will be defined at a later step in ClickHouse. The section should look similar to this:
xml
<user_directories>
<users_xml>
<path>users.xml</path>
</users_xml>
<local_directory>
<path>/var/lib/clickhouse/access/</path>
</local_directory>
<ldap>
<server>test_ldap_server</server>
<roles>
<scientists_role />
</roles>
<role_mapping>
<base_dn>dc=example,dc=com</base_dn>
<search_filter>(&(objectClass=groupOfUniqueNames)(uniqueMember={bind_dn}))</search_filter>
<attribute>cn</attribute>
</role_mapping>
</ldap>
</user_directories>
These are the basic settings used above:
|Parameter |Description |Example |
|----------|------------------------------|---------------------|
|server |label defined in the prior ldap_servers section|test_ldap_server|
|roles |name of the roles defined in ClickHouse the users will be mapped to|scientists_role|
|base_dn |base path to start search for groups with user |dc=example,dc=com|
|search_filter|ldap search filter to identify groups to select for mapping users |
(&(objectClass=groupOfUniqueNames)(uniqueMember={bind_dn}))
|
|attribute |the attribute whose value should be returned|cn|
Restart your ClickHouse server to apply the settings.
2. Configure ClickHouse database roles and permissions {#2-configure-clickhouse-database-roles-and-permissions}
:::note
The procedures in this section assume that SQL Access Control and Account Management in ClickHouse have been enabled. To enable them, view the
SQL Users and Roles guide
.
:::
Create a role in ClickHouse with the same name used in the role mapping section of the
config.xml
file:
sql
CREATE ROLE scientists_role;
Grant needed privileges to the role. The following statement grants admin privileges to any user able to authenticate through LDAP:
sql
GRANT ALL ON *.* TO scientists_role;
3. Test the LDAP configuration {#3-test-the-ldap-configuration}
Login using the ClickHouse client
```bash
$ clickhouse-client --user einstein --password password
ClickHouse client version 22.2.2.1.
Connecting to localhost:9000 as user einstein.
Connected to ClickHouse server version 22.2.2 revision 54455.
chnode1 :)
```
:::note
Use the
ldapsearch
command from step 1 to view all of the users available in the directory. For all of the users, the password is
password
::: | {"source_file": "configuring-ldap.md"} | [
0.019729360938072205,
-0.011240349151194096,
-0.04153687134385109,
0.008884533308446407,
-0.04772574454545975,
-0.026794027537107468,
0.05782660096883774,
-0.010058490559458733,
-0.11909878998994827,
-0.06158910319209099,
0.05615702643990517,
-0.07903942465782166,
0.05340060964226723,
0.00... |
41e16cbb-4823-45d4-acf6-f539e3081bba | chnode1 :)
```
:::note
Use the
ldapsearch
command from step 1 to view all of the users available in the directory. For all of the users, the password is
password
:::
Test that the user was mapped correctly to the
scientists_role
role and has admin permissions
sql
SHOW DATABASES
```response
Query id: 93b785ff-1482-4eda-95b0-b2d68b2c5e0f
┌─name───────────────┐
│ INFORMATION_SCHEMA │
│ db1_mysql          │
│ db2                │
│ db3                │
│ db4_mysql          │
│ db5_merge          │
│ default            │
│ information_schema │
│ system             │
└────────────────────┘
9 rows in set. Elapsed: 0.004 sec.
```
Summary {#summary}
This article demonstrated the basics of configuring ClickHouse to authenticate to an LDAP server and to map users to a role. It is also possible to configure individual users in ClickHouse that are authenticated by LDAP without configuring automated role mapping. The LDAP module can also be used to connect to Active Directory.
0.04905525967478752,
-0.10350673645734787,
-0.08326658606529236,
0.018172014504671097,
-0.054713405668735504,
-0.011251245625317097,
0.06090954318642616,
0.012872124090790749,
-0.11087261140346527,
-0.0017082010162994266,
0.03433908894658089,
-0.04681191220879555,
0.08383625000715256,
-0.0... |
b659bb0d-2a73-4d6c-a7b8-559c6cb6a228 | description: 'Page describing the
Shared
database engine, available in ClickHouse Cloud'
sidebar_label: 'Shared'
sidebar_position: 10
slug: /engines/database-engines/shared
title: 'Shared'
doc_type: 'reference'
import CloudOnlyBadge from '@theme/badges/CloudOnlyBadge';
Shared database engine
The
Shared
database engine works in conjunction with Shared Catalog to manage databases whose tables use stateless table engines such as
SharedMergeTree
.
These table engines do not write persistent state to disk and are compatible with dynamic compute environments.
The
Shared
database engine in Cloud removes the dependency for local disks.
It is a purely in-memory engine, requiring only CPU and memory.
How does it work? {#how-it-works}
The
Shared
database engine stores all database and table definitions in a central Shared Catalog backed by Keeper. Instead of writing to local disk, it maintains a single versioned global state shared across all compute nodes.
Each node tracks only the last applied version and, on startup, fetches the latest state without the need for local files or manual setup.
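A toy model of that version tracking (purely illustrative; not the actual Shared Catalog implementation):

```python
class CatalogFollower:
    """A node that tracks only the last applied version of the central catalog."""
    def __init__(self):
        self.version = 0
        self.definitions = {}

    def sync(self, catalog_version, catalog_definitions):
        # On startup (or when lagging), fetch the latest state;
        # no local files are involved.
        if catalog_version > self.version:
            self.definitions = dict(catalog_definitions)
            self.version = catalog_version

node = CatalogFollower()
node.sync(7, {"my_database.t": "CREATE TABLE ..."})
print(node.version)  # → 7
```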
Syntax {#syntax}
For end users, using Shared Catalog and the Shared database engine requires no additional configuration. Database creation is the same as always:
sql
CREATE DATABASE my_database;
ClickHouse Cloud automatically assigns the Shared database engine to databases. Any tables created within such a database using stateless engines will automatically benefit from Shared Catalog's replication and coordination capabilities.
:::tip
For more information on Shared Catalog and its benefits, see
"Shared catalog and shared database engine"
in the Cloud reference section.
::: | {"source_file": "shared.md"} | [
-0.04210463911294937,
-0.015442061237990856,
-0.030889157205820084,
0.053895652294158936,
0.006420257035642862,
0.015091156587004662,
-0.031355246901512146,
-0.09456833451986313,
-0.01694105565547943,
-0.004122569225728512,
-0.0034807382617145777,
0.046520113945007324,
0.041413478553295135,
... |
f2217c63-cee2-431b-86e6-d4b41372c899 | description: 'The engine is based on the Atomic engine. It supports replication of
metadata via a DDL log that is written to ZooKeeper and executed on all of the replicas
for a given database.'
sidebar_label: 'Replicated'
sidebar_position: 30
slug: /engines/database-engines/replicated
title: 'Replicated'
doc_type: 'reference'
Replicated
The engine is based on the
Atomic
engine. It supports replication of metadata via a DDL log that is written to ZooKeeper and executed on all of the replicas for a given database.
One ClickHouse server can have multiple replicated databases running and updating at the same time. But there can't be multiple replicas of the same replicated database.
Creating a database {#creating-a-database}
sql
CREATE DATABASE testdb [UUID '...'] ENGINE = Replicated('zoo_path', 'shard_name', 'replica_name') [SETTINGS ...]
Engine Parameters

- `zoo_path` — ZooKeeper path. The same ZooKeeper path corresponds to the same database.
- `shard_name` — Shard name. Database replicas are grouped into shards by `shard_name`.
- `replica_name` — Replica name. Replica names must be different for all replicas of the same shard.

Parameters can be omitted, in which case the missing parameters are substituted with default values.
If `zoo_path` contains the macro `{uuid}`, it is required to specify an explicit UUID or add `ON CLUSTER` to the CREATE statement to ensure that all replicas use the same UUID for this database.
For `ReplicatedMergeTree` tables, if no arguments are provided, the default arguments are used: `/clickhouse/tables/{uuid}/{shard}` and `{replica}`. These can be changed in the server settings `default_replica_path` and `default_replica_name`. The macro `{uuid}` is expanded to the table's UUID, while `{shard}` and `{replica}` are expanded to values from the server config, not from the database engine arguments. In the future, it will be possible to use the `shard_name` and `replica_name` of the Replicated database.
Specifics and recommendations {#specifics-and-recommendations}
DDL queries with
Replicated
database work in a similar way to
ON CLUSTER
queries, but with minor differences.
First, the DDL request tries to execute on the initiator (the host that originally received the request from the user). If the request is not fulfilled, the user immediately receives an error, and the other hosts do not try to fulfill it. If the request has been successfully completed on the initiator, then all other hosts will automatically retry until they complete it. The initiator will try to wait for the query to be completed on the other hosts (for no longer than
distributed_ddl_task_timeout
) and will return a table with the query execution statuses on each host. | {"source_file": "replicated.md"} | [
-0.023262672126293182,
-0.13358476758003235,
-0.04813087359070778,
-0.007719762623310089,
-0.04858246073126793,
-0.09142826497554779,
-0.025931917130947113,
-0.09634450823068619,
0.0022755321115255356,
-0.006979633588343859,
-0.0028237442020326853,
-0.04900006204843521,
0.13357016444206238,
... |
98dfc319-8985-444d-a34d-e591505cb545 | The behavior in case of errors is regulated by the
distributed_ddl_output_mode
setting, for a
Replicated
database it is better to set it to
null_status_on_timeout
β i.e. if some hosts did not have time to execute the request for
distributed_ddl_task_timeout
, then do not throw an exception, but show the
NULL
status for them in the table.
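The flow described above — fail fast on the initiator, retry on the other hosts, and report a `NULL`-like status for hosts that miss the deadline — can be sketched as follows (illustrative Python, not ClickHouse code; the host names and retry budget are made up):

```python
# Sketch of the Replicated-database DDL flow: the initiator runs first and
# fails fast; other hosts then retry until they succeed, and hosts that
# never finish are reported with None (cf. null_status_on_timeout).
def run_replicated_ddl(hosts, execute, initiator, max_attempts=3):
    statuses = {}
    if not execute(initiator):          # initiator failed -> immediate error
        raise RuntimeError(f"DDL failed on initiator {initiator}")
    statuses[initiator] = 0             # 0 == OK
    for host in hosts:
        if host == initiator:
            continue
        # any() stops at the first successful attempt
        ok = any(execute(host) for _ in range(max_attempts))
        statuses[host] = 0 if ok else None   # None ~ NULL status
    return statuses

flaky = {"node2": iter([False, True])}   # node2 succeeds on the 2nd try
def execute(host):
    return next(flaky[host]) if host in flaky else True

result = run_replicated_ddl(["node1", "node2", "node3"], execute, "node1")
```

Because retries happen only after the initiator succeeds, a query that is wrong for all hosts fails once instead of failing everywhere.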
The
system.clusters
system table contains a cluster named like the replicated database, which consists of all replicas of the database. This cluster is updated automatically when creating/deleting replicas, and it can be used for
Distributed
tables.
When creating a new replica of the database, this replica creates tables by itself. If the replica has been unavailable for a long time and has lagged behind the replication log, it compares its local metadata with the current metadata in ZooKeeper, moves the extra tables with data to a separate non-replicated database (so as not to accidentally delete anything superfluous), creates the missing tables, and updates the table names if they have been renamed. The data is replicated at the
ReplicatedMergeTree
level, i.e. if the table is not replicated, the data will not be replicated (the database is responsible only for metadata).
ALTER TABLE FREEZE|ATTACH|FETCH|DROP|DROP DETACHED|DETACH PARTITION|PART
queries are allowed but not replicated. The database engine will only add/fetch/remove the partition/part to the current replica. However, if the table itself uses a Replicated table engine, then the data will be replicated after using
ATTACH
.
If you only need to configure a cluster without maintaining table replication, refer to the
Cluster Discovery
feature.
Usage example {#usage-example}
Creating a cluster with three hosts:
sql
node1 :) CREATE DATABASE r ENGINE=Replicated('some/path/r','shard1','replica1');
node2 :) CREATE DATABASE r ENGINE=Replicated('some/path/r','shard1','other_replica');
node3 :) CREATE DATABASE r ENGINE=Replicated('some/path/r','other_shard','{replica}');
Creating a database on a cluster with implicit parameters:
sql
CREATE DATABASE r ON CLUSTER default ENGINE=Replicated;
Running the DDL-query:
sql
CREATE TABLE r.rmt (n UInt64) ENGINE=ReplicatedMergeTree ORDER BY n;
text
┌─────hosts────────────┬──status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ shard1|replica1      │       0 │       │                   2 │                0 │
│ shard1|other_replica │       0 │       │                   1 │                0 │
│ other_shard|r1       │       0 │       │                   0 │                0 │
└──────────────────────┴─────────┴───────┴─────────────────────┴──────────────────┘
Showing the system table:
sql
SELECT cluster, shard_num, replica_num, host_name, host_address, port, is_local
FROM system.clusters WHERE cluster='r'; | {"source_file": "replicated.md"} | [
0.005362072493880987,
-0.053838230669498444,
0.033990997821092606,
0.06374900043010712,
0.0483718067407608,
-0.11968941986560822,
-0.05652952939271927,
-0.02759266085922718,
0.06159916892647743,
0.03844349458813667,
-0.008425090461969376,
-0.03156883269548416,
0.08920972049236298,
-0.00922... |
024f8e28-e711-4a2d-badc-d6924b677ced | Showing the system table:
sql
SELECT cluster, shard_num, replica_num, host_name, host_address, port, is_local
FROM system.clusters WHERE cluster='r';
text
┌─cluster─┬─shard_num─┬─replica_num─┬─host_name─┬─host_address─┬─port─┬─is_local─┐
│ r       │         1 │           1 │ node3     │ 127.0.0.1    │ 9002 │        0 │
│ r       │         2 │           1 │ node2     │ 127.0.0.1    │ 9001 │        0 │
│ r       │         2 │           2 │ node1     │ 127.0.0.1    │ 9000 │        1 │
└─────────┴───────────┴─────────────┴───────────┴──────────────┴──────┴──────────┘
Creating a distributed table and inserting the data:
sql
node2 :) CREATE TABLE r.d (n UInt64) ENGINE=Distributed('r','r','rmt', n % 2);
node3 :) INSERT INTO r.d SELECT * FROM numbers(10);
node1 :) SELECT materialize(hostName()) AS host, groupArray(n) FROM r.d GROUP BY host;
text
┌─hosts─┬─groupArray(n)─┐
│ node3 │ [1,3,5,7,9]   │
│ node2 │ [0,2,4,6,8]   │
└───────┴───────────────┘
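The split seen in the result follows directly from the sharding key `n % 2` in the Distributed table definition above. A minimal sketch of that routing rule:

```python
# Sketch of the Distributed sharding key n % 2 from the example:
# numbers 0..9 are routed to two shards by the remainder.
def route(rows, num_shards):
    shards = {i: [] for i in range(num_shards)}
    for n in rows:
        shards[n % num_shards].append(n)  # sharding expression: n % num_shards
    return shards

shards = route(range(10), 2)
# shard 0 receives the even numbers, shard 1 the odd ones,
# matching the groupArray(n) output above.
```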
Adding a replica on one more host:
sql
node4 :) CREATE DATABASE r ENGINE=Replicated('some/path/r','other_shard','r2');
Adding a replica on one more host if the macro `{uuid}` is used in `zoo_path`:
sql
node1 :) SELECT uuid FROM system.databases WHERE database='r';
node4 :) CREATE DATABASE r UUID '<uuid from previous query>' ENGINE=Replicated('some/path/{uuid}','other_shard','r2');
The cluster configuration will look like this:
text
┌─cluster─┬─shard_num─┬─replica_num─┬─host_name─┬─host_address─┬─port─┬─is_local─┐
│ r       │         1 │           1 │ node3     │ 127.0.0.1    │ 9002 │        0 │
│ r       │         1 │           2 │ node4     │ 127.0.0.1    │ 9003 │        0 │
│ r       │         2 │           1 │ node2     │ 127.0.0.1    │ 9001 │        0 │
│ r       │         2 │           2 │ node1     │ 127.0.0.1    │ 9000 │        1 │
└─────────┴───────────┴─────────────┴───────────┴──────────────┴──────┴──────────┘
The distributed table will also get data from the new host:
sql
node2 :) SELECT materialize(hostName()) AS host, groupArray(n) FROM r.d GROUP BY host;
text
┌─hosts─┬─groupArray(n)─┐
│ node2 │ [1,3,5,7,9]   │
│ node4 │ [0,2,4,6,8]   │
└───────┴───────────────┘
Settings {#settings}
The following settings are supported: | {"source_file": "replicated.md"} | [
0.04537137970328331,
0.004123948980122805,
-0.03409184515476227,
0.05581512674689293,
0.019570181146264076,
-0.0677751824259758,
-0.01325285155326128,
-0.06647282093763351,
-0.05753336846828461,
0.03728323057293892,
0.026100054383277893,
-0.07159259170293808,
0.08557095378637314,
-0.092570... |
66e6801c-0e26-4808-b768-6b4ce221fafc | | Setting | Default | Description |
|------------------------------------------------------------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
max_broken_tables_ratio
| 1 | Do not recover replica automatically if the ratio of staled tables to all tables is greater |
|
max_replication_lag_to_enqueue
| 50 | Replica will throw exception on attempt to execute query if its replication lag greater |
|
wait_entry_commited_timeout_sec
| 3600 | Replicas will try to cancel query if timeout exceed, but initiator host has not executed it yet |
|
collection_name
| | A name of a collection defined in server's config where all info for cluster authentication is defined |
|
check_consistency
| true | Check consistency of local metadata and metadata in Keeper, do replica recovery on inconsistency |
|
max_retries_before_automatic_recovery
| 10 | Max number of attempts to execute a queue entry before marking replica as lost recovering it from snapshot (0 means infinite) |
|
allow_skipping_old_temporary_tables_ddls_of_refreshable_materialized_views
| false | If enabled, when processing DDLs in Replicated databases, it skips creating and exchanging DDLs of the temporary tables of refreshable materialized views if possible |
|
logs_to_keep
| 1000 | Default number of logs to keep in ZooKeeper for Replicated database. |
|
default_replica_path
|
/clickhouse/databases/{uuid} | {"source_file": "replicated.md"} | [
0.027377596125006676,
0.02894790656864643,
-0.006911426782608032,
-0.002339729806408286,
-0.05832530930638313,
0.07408155500888824,
0.011692257598042488,
0.04591621086001396,
0.04093393683433533,
-0.05727342143654823,
0.017689010128378868,
-0.03912276029586792,
-0.03153540566563606,
-0.041... |
fd497b8b-7e53-402b-a273-2385ba372f20 | | `default_replica_path` | `/clickhouse/databases/{uuid}` | The path to the database in ZooKeeper. Used during database creation if arguments are omitted. |
| `default_replica_shard_name` | `{shard}` | The shard name of the replica in the database. Used during database creation if arguments are omitted. |
| `default_replica_name` | `{replica}` | The name of the replica in the database. Used during database creation if arguments are omitted. | | {"source_file": "replicated.md"} | [
0.0016264664009213448,
-0.02377801015973091,
-0.08483948558568954,
0.002516314620152116,
0.015627505257725716,
-0.047589585185050964,
0.014202073216438293,
-0.042843934148550034,
-0.011481499299407005,
0.005616045091301203,
0.022933267056941986,
-0.03156536445021629,
0.05399826914072037,
0... |
5c886dc1-186f-42e2-b045-d0f508d2226f | Default values may be overridden in the configuration file:
xml
<clickhouse>
<database_replicated>
<max_broken_tables_ratio>0.75</max_broken_tables_ratio>
<max_replication_lag_to_enqueue>100</max_replication_lag_to_enqueue>
<wait_entry_commited_timeout_sec>1800</wait_entry_commited_timeout_sec>
<collection_name>postgres1</collection_name>
<check_consistency>false</check_consistency>
<max_retries_before_automatic_recovery>5</max_retries_before_automatic_recovery>
<default_replica_path>/clickhouse/databases/{uuid}</default_replica_path>
<default_replica_shard_name>{shard}</default_replica_shard_name>
<default_replica_name>{replica}</default_replica_name>
</database_replicated>
</clickhouse> | {"source_file": "replicated.md"} | [
0.042053669691085815,
-0.09872181713581085,
-0.058748092502355576,
0.0016382770845666528,
-0.04553963989019394,
-0.05870808660984039,
-0.09555838257074356,
-0.034865524619817734,
-0.01563066802918911,
0.01911778561770916,
0.06417607516050339,
-0.014438767917454243,
-0.011577156372368336,
-... |
6b2d5f4b-3dc3-46bf-ac5a-32ad2fb6322a | description: 'Allows connecting to databases on a remote MySQL server and perform
INSERT
and
SELECT
queries to exchange data between ClickHouse and MySQL.'
sidebar_label: 'MySQL'
sidebar_position: 50
slug: /engines/database-engines/mysql
title: 'MySQL'
doc_type: 'reference'
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
MySQL database engine
Allows connecting to databases on a remote MySQL server and performing `INSERT` and `SELECT` queries to exchange data between ClickHouse and MySQL.
The `MySQL` database engine translates queries to the MySQL server, so you can perform operations such as `SHOW TABLES` or `SHOW CREATE TABLE`.
You cannot perform the following queries:

- `RENAME`
- `CREATE TABLE`
- `ALTER`
Creating a database {#creating-a-database}
sql
CREATE DATABASE [IF NOT EXISTS] db_name [ON CLUSTER cluster]
ENGINE = MySQL('host:port', ['database' | database], 'user', 'password')
Engine Parameters

- `host:port` — MySQL server address.
- `database` — Remote database name.
- `user` — MySQL user.
- `password` — User password.
Data types support {#data_types-support}
| MySQL | ClickHouse |
|----------------------------------|---------------|
| UNSIGNED TINYINT | `UInt8` |
| TINYINT | `Int8` |
| UNSIGNED SMALLINT | `UInt16` |
| SMALLINT | `Int16` |
| UNSIGNED INT, UNSIGNED MEDIUMINT | `UInt32` |
| INT, MEDIUMINT | `Int32` |
| UNSIGNED BIGINT | `UInt64` |
| BIGINT | `Int64` |
| FLOAT | `Float32` |
| DOUBLE | `Float64` |
| DATE | `Date` |
| DATETIME, TIMESTAMP | `DateTime` |
| BINARY | `FixedString` |
All other MySQL data types are converted into
String
.
Nullable
is supported.
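The mapping above can be expressed as a simple lookup that falls back to `String` for any other type, mirroring the engine's behavior (an illustrative helper, not a ClickHouse API):

```python
# Lookup table mirroring the MySQL -> ClickHouse mapping above.
# Any type not listed falls back to String, as the engine does.
MYSQL_TO_CLICKHOUSE = {
    "UNSIGNED TINYINT": "UInt8",
    "TINYINT": "Int8",
    "UNSIGNED SMALLINT": "UInt16",
    "SMALLINT": "Int16",
    "UNSIGNED INT": "UInt32",
    "UNSIGNED MEDIUMINT": "UInt32",
    "INT": "Int32",
    "MEDIUMINT": "Int32",
    "UNSIGNED BIGINT": "UInt64",
    "BIGINT": "Int64",
    "FLOAT": "Float32",
    "DOUBLE": "Float64",
    "DATE": "Date",
    "DATETIME": "DateTime",
    "TIMESTAMP": "DateTime",
    "BINARY": "FixedString",
}

def clickhouse_type(mysql_type: str) -> str:
    return MYSQL_TO_CLICKHOUSE.get(mysql_type.upper(), "String")
```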
Global variables support {#global-variables-support}
For better compatibility you may address global variables in MySQL style, as
@@identifier
.
These variables are supported:
-
version
-
max_allowed_packet
:::note
For now, these variables are stubs and don't correspond to anything real.
:::
Example:
sql
SELECT @@version;
Examples of use {#examples-of-use}
Table in MySQL:
```text
mysql> USE test;
Database changed
mysql> CREATE TABLE `mysql_table` (
    ->   `int_id` INT NOT NULL AUTO_INCREMENT,
    ->   `float` FLOAT NOT NULL,
    ->   PRIMARY KEY (`int_id`));
Query OK, 0 rows affected (0,09 sec)

mysql> insert into mysql_table (`int_id`, `float`) VALUES (1,2);
Query OK, 1 row affected (0,00 sec) | {"source_file": "mysql.md"} | [
0.024227766320109367,
-0.07245464622974396,
-0.04651380330324173,
0.054550640285015106,
-0.10403606295585632,
-0.015753088518977165,
0.02398814633488655,
0.00003224005195079371,
-0.0186271034181118,
0.024680644273757935,
0.044681400060653687,
-0.032166238874197006,
0.18151699006557465,
-0.... |
67f4e416-62ae-4063-b5fc-1551550e3007 | mysql> insert into mysql_table (`int_id`, `float`) VALUES (1,2);
Query OK, 1 row affected (0,00 sec)
mysql> select * from mysql_table;
+--------+-------+
| int_id | value |
+--------+-------+
|      1 |     2 |
+--------+-------+
1 row in set (0,00 sec)
```
Database in ClickHouse, exchanging data with the MySQL server:
sql
CREATE DATABASE mysql_db ENGINE = MySQL('localhost:3306', 'test', 'my_user', 'user_password') SETTINGS read_write_timeout=10000, connect_timeout=100;
sql
SHOW DATABASES
text
┌─name─────┐
│ default  │
│ mysql_db │
│ system   │
└──────────┘
sql
SHOW TABLES FROM mysql_db
text
┌─name────────┐
│ mysql_table │
└─────────────┘
sql
SELECT * FROM mysql_db.mysql_table
text
┌─int_id─┬─value─┐
│      1 │     2 │
└────────┴───────┘
sql
INSERT INTO mysql_db.mysql_table VALUES (3,4)
sql
SELECT * FROM mysql_db.mysql_table
text
┌─int_id─┬─value─┐
│      1 │     2 │
│      3 │     4 │
└────────┴───────┘
0.08167193084955215,
-0.07239693403244019,
-0.02436610870063305,
0.030504802241921425,
-0.1135328859090805,
-0.059229835867881775,
0.040912847965955734,
0.028507765382528305,
-0.01760348491370678,
0.035573773086071014,
0.10026424378156662,
-0.07324258983135223,
0.16397064924240112,
-0.0823... |
0643c5c0-94cd-4a13-b9e5-a7bcdf4f78b0 | description: 'Creates a ClickHouse database with tables from PostgreSQL database.'
sidebar_label: 'MaterializedPostgreSQL'
sidebar_position: 60
slug: /engines/database-engines/materialized-postgresql
title: 'MaterializedPostgreSQL'
doc_type: 'reference'
import ExperimentalBadge from '@theme/badges/ExperimentalBadge';
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
MaterializedPostgreSQL
:::note
ClickHouse Cloud users are recommended to use
ClickPipes
for PostgreSQL replication to ClickHouse. This natively supports high-performance Change Data Capture (CDC) for PostgreSQL.
:::
Creates a ClickHouse database with tables from a PostgreSQL database. First, a database with the `MaterializedPostgreSQL` engine creates a snapshot of the PostgreSQL database and loads the required tables. The required tables can include any subset of tables from any subset of schemas in the specified database. Along with the snapshot, the database engine acquires an LSN, and once the initial dump of the tables is performed, it starts pulling updates from the WAL. After the database is created, tables newly added to the PostgreSQL database are not automatically added to replication. They have to be added manually with the `ATTACH TABLE db.table` query.
Replication is implemented with the PostgreSQL Logical Replication Protocol, which does not allow replicating DDL, but makes it possible to know whether replication-breaking changes happened (column type changes, adding/removing columns). Such changes are detected, and the corresponding tables stop receiving updates. In this case you should use `ATTACH`/`DETACH PERMANENTLY` queries to reload the table completely. If the DDL does not break replication (for example, renaming a column), the table will still receive updates (insertion is done by position).
:::note
This database engine is experimental. To use it, set
allow_experimental_database_materialized_postgresql
to 1 in your configuration files or by using the
SET
command:
sql
SET allow_experimental_database_materialized_postgresql=1
:::
Creating a database {#creating-a-database}
sql
CREATE DATABASE [IF NOT EXISTS] db_name [ON CLUSTER cluster]
ENGINE = MaterializedPostgreSQL('host:port', 'database', 'user', 'password') [SETTINGS ...]
Engine Parameters

- `host:port` — PostgreSQL server endpoint.
- `database` — PostgreSQL database name.
- `user` — PostgreSQL user.
- `password` — User password.
Example of use {#example-of-use}
```sql
CREATE DATABASE postgres_db
ENGINE = MaterializedPostgreSQL('postgres1:5432', 'postgres_database', 'postgres_user', 'postgres_password');
SHOW TABLES FROM postgres_db;
ββnameββββ
β table1 β
ββββββββββ
SELECT * FROM postgres_db.postgres_table;
```
Dynamically adding new tables to replication {#dynamically-adding-table-to-replication}
After the `MaterializedPostgreSQL` database is created, it does not automatically detect new tables in the corresponding PostgreSQL database. Such tables can be added manually:
sql
ATTACH TABLE postgres_database.new_table; | {"source_file": "materialized-postgresql.md"} | [
-0.03780871629714966,
-0.08838741481304169,
-0.02966592088341713,
0.003879041876643896,
-0.03319094330072403,
-0.027090832591056824,
-0.026778819039463997,
-0.07614835351705551,
-0.02148463949561119,
0.048102907836437225,
-0.018088145181536674,
0.01644103415310383,
0.03588777780532837,
-0.... |
27259a51-da8c-4ae1-a17f-59944fb72cab | sql
ATTACH TABLE postgres_database.new_table;
:::warning
Before version 22.1, adding a table to replication left a non-removed temporary replication slot (named
{db_name}_ch_replication_slot_tmp
). If attaching tables in ClickHouse version before 22.1, make sure to delete it manually (
SELECT pg_drop_replication_slot('{db_name}_ch_replication_slot_tmp')
). Otherwise disk usage will grow. This issue is fixed in 22.1.
:::
Dynamically removing tables from replication {#dynamically-removing-table-from-replication}
It is possible to remove specific tables from replication:
sql
DETACH TABLE postgres_database.table_to_remove PERMANENTLY;
PostgreSQL schema {#schema}
PostgreSQL
schema
can be configured in 3 ways (starting from version 21.12).
One schema for one `MaterializedPostgreSQL` database engine. Requires using the setting `materialized_postgresql_schema`.
Tables are accessed via table name only:
```sql
CREATE DATABASE postgres_database
ENGINE = MaterializedPostgreSQL('postgres1:5432', 'postgres_database', 'postgres_user', 'postgres_password')
SETTINGS materialized_postgresql_schema = 'postgres_schema';
SELECT * FROM postgres_database.table1;
```
Any number of schemas with a specified set of tables for one `MaterializedPostgreSQL` database engine. Requires using the setting `materialized_postgresql_tables_list`. Each table is written along with its schema.
Tables are accessed via schema name and table name at the same time:
```sql
CREATE DATABASE database1
ENGINE = MaterializedPostgreSQL('postgres1:5432', 'postgres_database', 'postgres_user', 'postgres_password')
SETTINGS materialized_postgresql_tables_list = 'schema1.table1,schema2.table2,schema1.table3',
materialized_postgresql_tables_list_with_schema = 1;
SELECT * FROM database1.`schema1.table1`;
SELECT * FROM database1.`schema2.table2`;
```
But in this case all tables in `materialized_postgresql_tables_list` must be written with their schema name.
Requires `materialized_postgresql_tables_list_with_schema = 1`.
Warning: for this case dots in table names are not allowed.
Any number of schemas with the full set of tables for one `MaterializedPostgreSQL` database engine. Requires using the setting `materialized_postgresql_schema_list`.
```sql
CREATE DATABASE database1
ENGINE = MaterializedPostgreSQL('postgres1:5432', 'postgres_database', 'postgres_user', 'postgres_password')
SETTINGS materialized_postgresql_schema_list = 'schema1,schema2,schema3';
SELECT * FROM database1.`schema1.table1`;
SELECT * FROM database1.`schema1.table2`;
SELECT * FROM database1.`schema2.table2`;
```
Warning: for this case dots in table names are not allowed.
Requirements {#requirements}
The `wal_level` setting must have the value `logical`, and the `max_replication_slots` parameter must have a value of at least `2` in the PostgreSQL config file.
Each replicated table must have one of the following
replica identity
:
primary key (by default)
index | {"source_file": "materialized-postgresql.md"} | [
0.05008324608206749,
-0.05188287794589996,
0.008588351309299469,
0.01340104267001152,
-0.03899663686752319,
-0.06840982288122177,
0.017978468909859657,
-0.08367394655942917,
0.0005081165581941605,
0.09115859866142273,
0.06796299666166306,
0.04630144685506821,
0.05818638950586319,
-0.001241... |
98a1dc37-d2b6-4a37-b872-1cf79903cadc | Each replicated table must have one of the following
replica identity
:
primary key (by default)
index
bash
postgres# CREATE TABLE postgres_table (a Integer NOT NULL, b Integer, c Integer NOT NULL, d Integer, e Integer NOT NULL);
postgres# CREATE unique INDEX postgres_table_index on postgres_table(a, c, e);
postgres# ALTER TABLE postgres_table REPLICA IDENTITY USING INDEX postgres_table_index;
The primary key is always checked first. If it is absent, then the index, defined as replica identity index, is checked.
If the index is used as a replica identity, there has to be only one such index in a table.
You can check what type is used for a specific table with the following command:
bash
postgres# SELECT CASE relreplident
WHEN 'd' THEN 'default'
WHEN 'n' THEN 'nothing'
WHEN 'f' THEN 'full'
WHEN 'i' THEN 'index'
END AS replica_identity
FROM pg_class
WHERE oid = 'postgres_table'::regclass;
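The `CASE` expression above decodes the one-letter `relreplident` codes stored in `pg_class`. The same mapping as a tiny helper (illustrative only, not part of any driver):

```python
# The relreplident column of pg_class stores a one-letter code; this
# mirrors the CASE mapping from the query above.
REPLICA_IDENTITY = {
    "d": "default",   # primary key, if any
    "n": "nothing",
    "f": "full",
    "i": "index",     # replica identity index
}

def replica_identity_name(code: str) -> str:
    return REPLICA_IDENTITY.get(code, "unknown")
```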
:::note
Replication of
TOAST
values is not supported. The default value for the data type will be used.
:::
Settings {#settings}
materialized_postgresql_tables_list
{#materialized-postgresql-tables-list}
Sets a comma-separated list of PostgreSQL database tables, which will be replicated via [MaterializedPostgreSQL](../../engines/database-engines/materialized-postgresql.md) database engine.
Each table can have a subset of replicated columns in brackets. If the subset of columns is omitted, then all columns of the table will be replicated.
```sql
materialized_postgresql_tables_list = 'table1(co1, col2),table2,table3(co3, col5, col7)'
```
Default value: empty list, which means the whole PostgreSQL database will be replicated.
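A sketch of how the `materialized_postgresql_tables_list` format could be parsed — table names with an optional column subset in brackets (a hypothetical parser, not part of ClickHouse):

```python
import re

def parse_tables_list(value: str):
    """Parse 'table1(col1, col2),table2' into {name: columns or None}."""
    tables = {}
    # each entry: a table name, optionally followed by (col, col, ...)
    for name, cols in re.findall(r'([\w.]+)(?:\(([^)]*)\))?,?', value):
        if name:
            tables[name] = [c.strip() for c in cols.split(',')] if cols else None
    return tables

parsed = parse_tables_list('table1(co1, col2),table2,table3(co3, col5, col7)')
# table2 has no bracketed subset, so all of its columns are replicated (None)
```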
materialized_postgresql_schema
{#materialized-postgresql-schema}
Default value: empty string. (Default schema is used)
materialized_postgresql_schema_list
{#materialized-postgresql-schema-list}
Default value: empty list. (Default schema is used)
materialized_postgresql_max_block_size
{#materialized-postgresql-max-block-size}
Sets the number of rows collected in memory before flushing data into PostgreSQL database table.
Possible values:
- Positive integer.
Default value: `65536`.
materialized_postgresql_replication_slot
{#materialized-postgresql-replication-slot}
A user-created replication slot. Must be used together with `materialized_postgresql_snapshot`.
materialized_postgresql_snapshot
{#materialized-postgresql-snapshot}
A text string identifying a snapshot, from which [initial dump of PostgreSQL tables](../../engines/database-engines/materialized-postgresql.md) will be performed. Must be used together with `materialized_postgresql_replication_slot`.
```sql
CREATE DATABASE database1
ENGINE = MaterializedPostgreSQL('postgres1:5432', 'postgres_database', 'postgres_user', 'postgres_password')
SETTINGS materialized_postgresql_tables_list = 'table1,table2,table3';
SELECT * FROM database1.table1;
``` | {"source_file": "materialized-postgresql.md"} | [
-0.0013581918319687247,
-0.04121336340904236,
-0.032662853598594666,
0.009531169198453426,
-0.06080459803342819,
0.0014741302002221346,
0.06689488887786865,
-0.08433344960212708,
0.05084706470370293,
-0.0010655713267624378,
0.005545048508793116,
0.0075286817736923695,
-0.0061019472777843475,... |
4304bdfb-f7dd-4aa3-b9d3-d96c4ff01eb5 | SELECT * FROM database1.table1;
```
The settings can be changed, if necessary, using a DDL query. But it is impossible to change the setting `materialized_postgresql_tables_list`. To update the list of tables in this setting use the `ATTACH TABLE` query.
```sql
ALTER DATABASE postgres_database MODIFY SETTING materialized_postgresql_max_block_size = <new_size>;
```
materialized_postgresql_use_unique_replication_consumer_identifier
{#materialized_postgresql_use_unique_replication_consumer_identifier}
Use a unique replication consumer identifier for replication. Default: `0`.
If set to `1`, allows setting up several `MaterializedPostgreSQL` tables pointing to the same `PostgreSQL` table.
Notes {#notes}
Failover of the logical replication slot {#logical-replication-slot-failover}
Logical replication slots which exist on the primary are not available on standby replicas.
So if there is a failover, the new primary (the old physical standby) won't be aware of any slots which existed on the old primary. This will lead to broken replication from PostgreSQL.
A solution to this is to manage replication slots yourself and define a permanent replication slot (some information can be found here). You'll need to pass the slot name via the `materialized_postgresql_replication_slot` setting, and it has to be exported with the `EXPORT SNAPSHOT` option. The snapshot identifier needs to be passed via the `materialized_postgresql_snapshot` setting.
Please note that this should be used only if it is actually needed. If there is no real need for it, or no full understanding of why it is needed, it is better to let the table engine create and manage its own replication slot.
Example (from
@bchrobot
)
Configure replication slot in PostgreSQL.
yaml
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
name: acid-demo-cluster
spec:
numberOfInstances: 2
postgresql:
parameters:
wal_level: logical
patroni:
slots:
clickhouse_sync:
type: logical
database: demodb
plugin: pgoutput
Wait for replication slot to be ready, then begin a transaction and export the transaction snapshot identifier:
sql
BEGIN;
SELECT pg_export_snapshot();
In ClickHouse create database:
sql
CREATE DATABASE demodb
ENGINE = MaterializedPostgreSQL('postgres1:5432', 'postgres_database', 'postgres_user', 'postgres_password')
SETTINGS
materialized_postgresql_replication_slot = 'clickhouse_sync',
materialized_postgresql_snapshot = '0000000A-0000023F-3',
materialized_postgresql_tables_list = 'table1,table2,table3';
End the PostgreSQL transaction once replication to ClickHouse DB is confirmed. Verify that replication continues after failover:
bash
kubectl exec acid-demo-cluster-0 -c postgres -- su postgres -c 'patronictl failover --candidate acid-demo-cluster-1 --force'
Required permissions {#required-permissions}
CREATE PUBLICATION
-- create query privilege. | {"source_file": "materialized-postgresql.md"} | [
-0.004913369193673134,
-0.09237676858901978,
-0.03959774598479271,
0.011549334041774273,
-0.07740303128957748,
-0.01827327534556389,
-0.03532187640666962,
-0.016525376588106155,
-0.0550394207239151,
0.02868468314409256,
0.021115479990839958,
0.032312337309122086,
0.019227048382163048,
-0.0... |
7b7dc95e-7935-49dc-9ab1-0f43ac60e24e | Required permissions {#required-permissions}
CREATE PUBLICATION
-- create query privilege.
CREATE_REPLICATION_SLOT
-- replication privilege.
pg_drop_replication_slot
-- replication privilege or superuser.
DROP PUBLICATION
-- owner of publication (
username
in MaterializedPostgreSQL engine itself).
It is possible to avoid executing commands 2 and 3 (and thus needing those permissions) by using the settings `materialized_postgresql_replication_slot` and `materialized_postgresql_snapshot`. But use them with much care.
Access to tables:
pg_publication
pg_replication_slots
pg_publication_tables | {"source_file": "materialized-postgresql.md"} | [
0.0401424802839756,
-0.03122572973370552,
-0.09816116094589233,
0.006490154657512903,
-0.11627977341413498,
-0.03764393925666809,
-0.06277360767126083,
-0.05925514176487923,
-0.035663582384586334,
0.0728677436709404,
-0.006611701566725969,
-0.03063875064253807,
0.04746360331773758,
-0.0303... |
f371aa05-d7b7-4298-af85-47a244eb4624 | description: 'The
Atomic
engine supports non-blocking
DROP TABLE
and
RENAME TABLE
queries, and atomic
EXCHANGE TABLES
queries. The
Atomic
database engine is used
by default.'
sidebar_label: 'Atomic'
sidebar_position: 10
slug: /engines/database-engines/atomic
title: 'Atomic'
doc_type: 'reference'
Atomic
The
Atomic
engine supports non-blocking
DROP TABLE
and
RENAME TABLE
queries, and atomic
EXCHANGE TABLES
queries. The
Atomic
database engine is used by default in open-source ClickHouse.
:::note
On ClickHouse Cloud, the
Shared
database engine
is used by default and also supports
the above mentioned operations.
:::
Creating a database {#creating-a-database}
sql
CREATE DATABASE test [ENGINE = Atomic] [SETTINGS disk=...];
Specifics and recommendations {#specifics-and-recommendations}
Table UUID {#table-uuid}
Each table in the
Atomic
database has a persistent
UUID
and stores its data in the following directory:
text
/clickhouse_path/store/xxx/xxxyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy/
Where
xxxyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy
is the UUID of the table.
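The layout above can be expressed as a small helper: the data directory is `store/`, then the first three characters of the UUID, then the full UUID (a sketch based on the path shown above):

```python
# Sketch of the directory layout described above:
# <clickhouse_path>/store/<first 3 UUID chars>/<full UUID>/
def table_data_path(clickhouse_path: str, table_uuid: str) -> str:
    return f"{clickhouse_path}/store/{table_uuid[:3]}/{table_uuid}/"

path = table_data_path("/var/lib/clickhouse",
                       "28f1c61c-2970-457a-bffe-454156ddcfef")
```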
By default, the UUID is generated automatically. However, users can explicitly specify the UUID when creating a table, though this is not recommended.
For example:
sql
CREATE TABLE name UUID '28f1c61c-2970-457a-bffe-454156ddcfef' (n UInt64) ENGINE = ...;
:::note
You can use the
show_table_uuid_in_table_create_query_if_not_nil
setting to display the UUID with the
SHOW CREATE
query.
:::
RENAME TABLE {#rename-table}
RENAME
queries do not modify the UUID or move table data. These queries execute immediately and do not wait for other queries that are using the table to complete.
DROP/DETACH TABLE {#drop-detach-table}
When using `DROP TABLE`, no data is removed. The `Atomic` engine just marks the table as dropped by moving its metadata to `/clickhouse_path/metadata_dropped/` and notifies the background thread. The delay before the final table data deletion is specified by the `database_atomic_delay_before_drop_table_sec` setting.
You can specify synchronous mode using the `SYNC` modifier. Use the `database_atomic_wait_for_drop_and_detach_synchronously` setting to do this. In this case, `DROP` waits for running `SELECT`, `INSERT`, and other queries which are using the table to finish. The table will be removed when it is not in use.
EXCHANGE TABLES/DICTIONARIES {#exchange-tables}
The
EXCHANGE
query swaps tables or dictionaries atomically. For instance, instead of this non-atomic operation:
sql title="Non-atomic"
RENAME TABLE new_table TO tmp, old_table TO new_table, tmp TO old_table;
you can use an atomic one:
sql title="Atomic"
EXCHANGE TABLES new_table AND old_table;
ReplicatedMergeTree in atomic database {#replicatedmergetree-in-atomic-database}
For
ReplicatedMergeTree
tables, it is recommended not to specify the engine parameters for the path in ZooKeeper and the replica name. In this case, the configuration parameters
default_replica_path
and
default_replica_name
will be used. If you want to specify engine parameters explicitly, it is recommended to use the
{uuid}
macros. This ensures that unique paths are automatically generated for each table in ZooKeeper.
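As a hedged sketch (the table name and column are illustrative), explicit engine parameters using these macros might look like this:

```sql
-- {uuid} expands to the table's UUID, so every table gets a unique
-- ZooKeeper path; {shard} and {replica} come from the server's macro configuration.
CREATE TABLE test_table (n UInt64)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
ORDER BY n;
```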
Metadata disk {#metadata-disk}
When
disk
is specified in
SETTINGS
, the disk is used to store table metadata files.
For example:
sql
CREATE TABLE db (n UInt64) ENGINE = Atomic SETTINGS disk=disk(type='local', path='/var/lib/clickhouse-disks/db_disk');
If unspecified, the disk defined in
database_disk.disk
is used by default.
See also {#see-also}
system.databases
system table
description: 'Keeps tables in RAM only
expiration_time_in_seconds
seconds after
last access. Can be used only with Log type tables.'
sidebar_label: 'Lazy'
sidebar_position: 20
slug: /engines/database-engines/lazy
title: 'Lazy'
doc_type: 'reference'
Lazy
Keeps tables in RAM only
expiration_time_in_seconds
seconds after last access. Can be used only with *Log tables.
It's optimized for storing many small *Log tables, for which there is a long time interval between accesses.
Creating a database {#creating-a-database}
sql
CREATE DATABASE testlazy
ENGINE = Lazy(expiration_time_in_seconds);
description: 'Documentation for Database Engines'
slug: /engines/database-engines/
toc_folder_title: 'Database Engines'
toc_priority: 27
toc_title: 'Introduction'
title: 'Database Engines'
doc_type: 'landing-page'
Database engines
Database engines allow you to work with tables. By default, ClickHouse uses the
Atomic
database engine, which provides configurable
table engines
and an
SQL dialect
.
Here is a complete list of available database engines. Follow the links for more details:
description: 'Allows to connect to databases on a remote PostgreSQL server.'
sidebar_label: 'PostgreSQL'
sidebar_position: 40
slug: /engines/database-engines/postgresql
title: 'PostgreSQL'
doc_type: 'guide'
PostgreSQL
Allows connecting to databases on a remote
PostgreSQL
server. Supports read and write operations (
SELECT
and
INSERT
queries) to exchange data between ClickHouse and PostgreSQL.
Gives real-time access to the table list and table structure of the remote PostgreSQL server with the help of
SHOW TABLES
and
DESCRIBE TABLE
queries.
Supports table structure modifications (
ALTER TABLE ... ADD|DROP COLUMN
). If the
use_table_cache
parameter (see the Engine Parameters below) is set to
1
, the table structure is cached and not checked for being modified, but can be updated with
DETACH
and
ATTACH
queries.
Creating a database {#creating-a-database}
sql
CREATE DATABASE test_database
ENGINE = PostgreSQL('host:port', 'database', 'user', 'password'[, `schema`, `use_table_cache`]);
Engine Parameters
host:port
β PostgreSQL server address.
database
β Remote database name.
user
β PostgreSQL user.
password
β User password.
schema
β PostgreSQL schema.
use_table_cache
β Defines if the database table structure is cached or not. Optional. Default value:
0
.
Data types support {#data_types-support}
| PostgreSQL | ClickHouse |
|------------------|--------------------------------------------------------------|
| DATE |
Date
|
| TIMESTAMP |
DateTime
|
| REAL |
Float32
|
| DOUBLE |
Float64
|
| DECIMAL, NUMERIC |
Decimal
|
| SMALLINT |
Int16
|
| INTEGER |
Int32
|
| BIGINT |
Int64
|
| SERIAL |
UInt32
|
| BIGSERIAL |
UInt64
|
| TEXT, CHAR |
String
|
| INTEGER | Nullable(
Int32
)|
| ARRAY |
Array
|
Examples of use {#examples-of-use}
Database in ClickHouse, exchanging data with the PostgreSQL server:
sql
CREATE DATABASE test_database
ENGINE = PostgreSQL('postgres1:5432', 'test_database', 'postgres', 'mysecretpassword', 'schema_name',1);
sql
SHOW DATABASES;
text
ββnameβββββββββββ
β default β
β test_database β
β system β
βββββββββββββββββ
sql
SHOW TABLES FROM test_database;
text
ββnameββββββββ
β test_table β
ββββββββββββββ
Reading data from the PostgreSQL table:
sql
SELECT * FROM test_database.test_table;
text
ββidββ¬βvalueββ
β 1 β 2 β
ββββββ΄ββββββββ
Writing data to the PostgreSQL table:
sql
INSERT INTO test_database.test_table VALUES (3,4);
SELECT * FROM test_database.test_table;
text
ββint_idββ¬βvalueββ
β 1 β 2 β
β 3 β 4 β
ββββββββββ΄ββββββββ
Suppose the table structure was modified in PostgreSQL:
sql
postgre> ALTER TABLE test_table ADD COLUMN data Text
As the
use_table_cache
parameter was set to
1
when the database was created, the table structure in ClickHouse was cached and therefore not modified:
sql
DESCRIBE TABLE test_database.test_table;
text
ββnameββββ¬βtypeβββββββββββββββ
β id β Nullable(Integer) β
β value β Nullable(Integer) β
ββββββββββ΄ββββββββββββββββββββ
After detaching the table and attaching it again, the structure was updated:
sql
DETACH TABLE test_database.test_table;
ATTACH TABLE test_database.test_table;
DESCRIBE TABLE test_database.test_table;
text
ββnameββββ¬βtypeβββββββββββββββ
β id β Nullable(Integer) β
β value β Nullable(Integer) β
β data β Nullable(String) β
ββββββββββ΄ββββββββββββββββββββ
Related content {#related-content}
Blog:
ClickHouse and PostgreSQL - a match made in data heaven - part 1
Blog:
ClickHouse and PostgreSQL - a Match Made in Data Heaven - part 2
description: 'Allows to instantly attach table/database from backups in read-only
mode.'
sidebar_label: 'Backup'
sidebar_position: 60
slug: /engines/database-engines/backup
title: 'Backup'
doc_type: 'reference'
Backup
The Backup database engine allows you to instantly attach tables/databases from
backups
in read-only mode.
The Backup engine works with both incremental and non-incremental backups.
Creating a database {#creating-a-database}
sql
CREATE DATABASE backup_database
ENGINE = Backup('database_name_inside_backup', 'backup_destination')
The backup destination can be any valid backup
destination
like
Disk
,
S3
,
File
.
With
Disk
backup destination, the query to create a database from a backup looks like this:
sql
CREATE DATABASE backup_database
ENGINE = Backup('database_name_inside_backup', Disk('disk_name', 'backup_name'))
Engine Parameters
database_name_inside_backup
β Name of the database inside the backup.
backup_destination
β Backup destination.
Usage example {#usage-example}
Let's look at an example with a
Disk
backup destination. First, let's set up a backups disk in
storage.xml
:
xml
<storage_configuration>
<disks>
<backups>
<type>local</type>
<path>/home/ubuntu/ClickHouseWorkDir/backups/</path>
</backups>
</disks>
</storage_configuration>
<backups>
<allowed_disk>backups</allowed_disk>
<allowed_path>/home/ubuntu/ClickHouseWorkDir/backups/</allowed_path>
</backups>
As an example of usage, let's create a test database and tables, insert some data, and then create a backup:
```sql
CREATE DATABASE test_database;
CREATE TABLE test_database.test_table_1 (id UInt64, value String) ENGINE=MergeTree ORDER BY id;
INSERT INTO test_database.test_table_1 VALUES (0, 'test_database.test_table_1');
CREATE TABLE test_database.test_table_2 (id UInt64, value String) ENGINE=MergeTree ORDER BY id;
INSERT INTO test_database.test_table_2 VALUES (0, 'test_database.test_table_2');
CREATE TABLE test_database.test_table_3 (id UInt64, value String) ENGINE=MergeTree ORDER BY id;
INSERT INTO test_database.test_table_3 VALUES (0, 'test_database.test_table_3');
BACKUP DATABASE test_database TO Disk('backups', 'test_database_backup');
```
Now that we have the
test_database_backup
backup, let's create a database using the Backup engine:
sql
CREATE DATABASE test_database_backup ENGINE = Backup('test_database', Disk('backups', 'test_database_backup'));
Now we can query any table from the database:
```sql
SELECT id, value FROM test_database_backup.test_table_1;
ββidββ¬βvalueβββββββββββββββββββββββ
β 0 β test_database.test_table_1 β
ββββββ΄βββββββββββββββββββββββββββββ
SELECT id, value FROM test_database_backup.test_table_2;
ββidββ¬βvalueβββββββββββββββββββββββ
β 0 β test_database.test_table_2 β
ββββββ΄βββββββββββββββββββββββββββββ
SELECT id, value FROM test_database_backup.test_table_3;
ββidββ¬βvalueβββββββββββββββββββββββ
β 0 β test_database.test_table_3 β
ββββββ΄βββββββββββββββββββββββββββββ
```
It is also possible to work with this Backup database as with any ordinary database. For example, query the tables in it:
```sql
SELECT database, name FROM system.tables WHERE database = 'test_database_backup';
ββdatabaseββββββββββββββ¬βnameββββββββββ
β test_database_backup β test_table_1 β
β test_database_backup β test_table_2 β
β test_database_backup β test_table_3 β
ββββββββββββββββββββββββ΄βββββββββββββββ
```
description: 'Allows to connect to SQLite databases and perform
INSERT
and
SELECT
queries to exchange data between ClickHouse and SQLite.'
sidebar_label: 'SQLite'
sidebar_position: 55
slug: /engines/database-engines/sqlite
title: 'SQLite'
doc_type: 'reference'
SQLite
Allows connecting to an
SQLite
database and performing
INSERT
and
SELECT
queries to exchange data between ClickHouse and SQLite.
Creating a database {#creating-a-database}
sql
CREATE DATABASE sqlite_database
ENGINE = SQLite('db_path')
Engine Parameters
db_path
β Path to a file with SQLite database.
Data types support {#data_types-support}
| SQLite | ClickHouse |
|---------------|---------------------------------------------------------|
| INTEGER |
Int32
|
| REAL |
Float32
|
| TEXT |
String
|
| BLOB |
String
|
Specifics and recommendations {#specifics-and-recommendations}
SQLite stores the entire database (definitions, tables, indices, and the data itself) as a single cross-platform file on a host machine. During writes, SQLite locks the entire database file; therefore, write operations are performed sequentially. Read operations can run concurrently.
SQLite does not require service management (such as startup scripts) or access control based on
GRANT
and passwords. Access control is handled by means of file-system permissions given to the database file itself.
Usage example {#usage-example}
Database in ClickHouse, connected to the SQLite:
sql
CREATE DATABASE sqlite_db ENGINE = SQLite('sqlite.db');
SHOW TABLES FROM sqlite_db;
text
βββnameββββ
β table1 β
β table2 β
βββββββββββ
Displaying the table's data:
sql
SELECT * FROM sqlite_db.table1;
text
ββcol1βββ¬βcol2ββ
β line1 β 1 β
β line2 β 2 β
β line3 β 3 β
βββββββββ΄βββββββ
Inserting data into SQLite table from ClickHouse table:
sql
CREATE TABLE clickhouse_table(`col1` String,`col2` Int16) ENGINE = MergeTree() ORDER BY col2;
INSERT INTO clickhouse_table VALUES ('text',10);
INSERT INTO sqlite_db.table1 SELECT * FROM clickhouse_table;
SELECT * FROM sqlite_db.table1;
text
ββcol1βββ¬βcol2ββ
β line1 β 1 β
β line2 β 2 β
β line3 β 3 β
β text β 10 β
βββββββββ΄βββββββ
description: 'The DataLakeCatalog database engine enables you to connect ClickHouse to external data catalogs and query open table format data'
sidebar_label: 'DataLakeCatalog'
slug: /engines/database-engines/datalakecatalog
title: 'DataLakeCatalog'
doc_type: 'reference'
DataLakeCatalog
The
DataLakeCatalog
database engine enables you to connect ClickHouse to external
data catalogs and query open table format data without the need for data duplication.
This transforms ClickHouse into a powerful query engine that works seamlessly with
your existing data lake infrastructure.
Supported catalogs {#supported-catalogs}
The
DataLakeCatalog
engine supports the following data catalogs:
AWS Glue Catalog
- For Iceberg tables in AWS environments
Databricks Unity Catalog
- For Delta Lake and Iceberg tables
Hive Metastore
- Traditional Hadoop ecosystem catalog
REST Catalogs
- Any catalog supporting the Iceberg REST specification
Creating a database {#creating-a-database}
You will need to enable the relevant settings below to use the
DataLakeCatalog
engine:
sql
SET allow_experimental_database_iceberg = 1;
SET allow_experimental_database_unity_catalog = 1;
SET allow_experimental_database_glue_catalog = 1;
SET allow_experimental_database_hms_catalog = 1;
Databases with the
DataLakeCatalog
engine can be created using the following syntax:
sql
CREATE DATABASE database_name
ENGINE = DataLakeCatalog(catalog_endpoint[, user, password])
SETTINGS
catalog_type,
[...]
The following settings are supported:
| Setting | Description |
|-------------------------|-----------------------------------------------------------------------------------------|
|
catalog_type
| Type of catalog:
glue
,
unity
(Delta),
rest
(Iceberg),
hive
,
onelake
(Iceberg) |
|
warehouse
| The warehouse/database name to use in the catalog. |
|
catalog_credential
| Authentication credential for the catalog (e.g., API key or token) |
|
auth_header
| Custom HTTP header for authentication with the catalog service |
|
auth_scope
| OAuth2 scope for authentication (if using OAuth) |
|
storage_endpoint
| Endpoint URL for the underlying storage |
|
oauth_server_uri
| URI of the OAuth2 authorization server for authentication |
|
vended_credentials
| Boolean indicating whether to use vended credentials (AWS-specific) |
|
aws_access_key_id
| AWS access key ID for S3/Glue access (if not using vended credentials) |
|
aws_secret_access_key
| AWS secret access key for S3/Glue access (if not using vended credentials) |
|
region
| AWS region for the service (e.g.,
us-east-1
) |
Examples {#examples}
See the sections below for examples of using the
DataLakeCatalog
engine:
Unity Catalog
Glue Catalog
OneLake Catalog
The OneLake catalog can be used by enabling
allow_experimental_database_iceberg
or
allow_database_iceberg
.
sql
CREATE DATABASE database_name
ENGINE = DataLakeCatalog(catalog_endpoint)
SETTINGS
catalog_type = 'onelake',
warehouse = warehouse,
onelake_tenant_id = tenant_id,
oauth_server_uri = server_uri,
auth_scope = auth_scope,
onelake_client_id = client_id,
onelake_client_secret = client_secret;
SHOW TABLES IN database_name;
SELECT count() from database_name.table_name;
description: 'Documentation for Table Engines'
slug: /engines/table-engines/
toc_folder_title: 'Table Engines'
toc_priority: 26
toc_title: 'Introduction'
title: 'Table Engines'
doc_type: 'reference'
Table engines
The table engine (type of table) determines:
How and where data is stored, where to write it to, and where to read it from.
Which queries are supported, and how.
Concurrent data access.
Use of indexes, if present.
Whether multithread request execution is possible.
Data replication parameters.
Engine families {#engine-families}
MergeTree {#mergetree}
The most universal and functional table engines for high-load tasks. The property shared by these engines is quick data insertion with subsequent background data processing.
MergeTree
family engines support data replication (with
Replicated*
versions of engines), partitioning, secondary data-skipping indexes, and other features not supported in other engines.
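As an illustrative sketch (the table and column names are hypothetical), a MergeTree table combining partitioning and a secondary data-skipping index might look like this:

```sql
CREATE TABLE hits
(
    event_date Date,
    user_id UInt64,
    url String,
    -- Secondary data-skipping index: lets reads skip granules whose
    -- bloom filter proves they cannot contain the searched URL.
    INDEX url_idx url TYPE bloom_filter GRANULARITY 4
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(event_date)
ORDER BY (event_date, user_id);
```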
Engines in the family:
| MergeTree Engines |
|-------------------------------------------------------------------------------------------------------------------------------------------|
|
MergeTree
|
|
ReplacingMergeTree
|
|
SummingMergeTree
|
|
AggregatingMergeTree
|
|
CollapsingMergeTree
|
|
VersionedCollapsingMergeTree
|
|
GraphiteMergeTree
|
|
CoalescingMergeTree
|
Log {#log}
Lightweight
engines
with minimum functionality. They're the most effective when you need to quickly write many small tables (up to approximately 1 million rows) and read them later as a whole.
Engines in the family:
| Log Engines |
|----------------------------------------------------------------------------|
|
TinyLog
|
|
StripeLog
|
|
Log
|
Integration engines {#integration-engines}
Engines for communicating with other data storage and processing systems.
Engines in the family:
| Integration Engines |
|---------------------------------------------------------------------------------|
|
ODBC
|
|
JDBC
|
|
MySQL
|
|
MongoDB
|
|
Redis
|
|
HDFS
|
|
S3
|
|
Kafka
|
|
EmbeddedRocksDB
|
|
RabbitMQ
|
|
PostgreSQL
|
|
S3Queue
|
|
TimeSeries
|
Special engines {#special-engines}
Engines in the family:
| Special Engines |
|---------------------------------------------------------------|
|
Distributed
|
|
Dictionary
|
|
Merge
|
|
Executable
|
|
File
|
|
Null
|
|
Set
|
|
Join
|
|
URL
|
|
View
|
|
Memory
|
|
Buffer
|
|
External Data
|
|
GenerateRandom
|
|
KeeperMap
|
|
FileLog
|
Virtual columns {#table_engines-virtual_columns}
A virtual column is an integral table engine attribute that is defined in the engine source code.
You shouldn't specify virtual columns in the
CREATE TABLE
query, and you can't see them in
SHOW CREATE TABLE
and
DESCRIBE TABLE
query results. Virtual columns are also read-only, so you can't insert data into virtual columns.
To select data from a virtual column, you must specify its name in the
SELECT
query.
SELECT *
does not return values from virtual columns.
If you create a table with a column that has the same name as one of the table virtual columns, the virtual column becomes inaccessible. We do not recommend doing this. To help avoid conflicts, virtual column names are usually prefixed with an underscore.
_table
β Contains the name of the table from which data was read. Type:
String
.
Regardless of the table engine being used, each table includes a universal virtual column named
_table
.
When querying a table with the Merge table engine, you can set constant conditions on
_table
in the
WHERE/PREWHERE
clause (for example,
WHERE _table='xyz'
). In this case, the read operation is performed only for those tables where the condition on
_table
is satisfied, so the
_table
column acts as an index.
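As a hedged sketch (the watches_* tables and their columns are assumed to exist with compatible structures), this could look like:

```sql
-- The Merge engine reads from every table in the current database
-- whose name matches the regexp '^watches_'.
CREATE TABLE all_watches (date Date, user_id UInt64)
ENGINE = Merge(currentDatabase(), '^watches_');

-- Only watches_a is actually read: the constant condition on _table
-- acts as an index over the underlying tables.
SELECT count() FROM all_watches WHERE _table = 'watches_a';
```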
When using queries formatted like
SELECT ... FROM (... UNION ALL ...)
, we can determine which actual table the returned rows originate from by specifying the
_table
column.
description: 'Documentation for the StripeLog table engine'
slug: /engines/table-engines/log-family/stripelog
toc_priority: 32
toc_title: 'StripeLog'
title: 'StripeLog table engine'
doc_type: 'reference'
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
StripeLog table engine
This engine belongs to the family of log engines. See the common properties of log engines and their differences in the
Log Engine Family
article.
Use this engine in scenarios where you need to write many tables with a small amount of data (less than 1 million rows). For example, such tables can be used to store incoming data batches that require atomic processing. 100k instances of this table type are viable on a single ClickHouse server. This table engine should be preferred over
Log
when a high number of tables are required. This is at the expense of read efficiency.
Creating a table {#table_engines-stripelog-creating-a-table}
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
column1_name [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
column2_name [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
...
) ENGINE = StripeLog
See the detailed description of the
CREATE TABLE
query.
Writing the data {#table_engines-stripelog-writing-the-data}
The
StripeLog
engine stores all the columns in one file. For each
INSERT
query, ClickHouse appends the data block to the end of a table file, writing columns one by one.
For each table ClickHouse writes the files:
data.bin
β Data file.
index.mrk
β File with marks. Marks contain offsets for each column of each data block inserted.
The
StripeLog
engine does not support the
ALTER UPDATE
and
ALTER DELETE
operations.
Reading the data {#table_engines-stripelog-reading-the-data}
The file with marks allows ClickHouse to parallelize the reading of data. This means that a
SELECT
query returns rows in an unpredictable order. Use the
ORDER BY
clause to sort rows.
Example of use {#table_engines-stripelog-example-of-use}
Creating a table:
sql
CREATE TABLE stripe_log_table
(
timestamp DateTime,
message_type String,
message String
)
ENGINE = StripeLog
Inserting data:
sql
INSERT INTO stripe_log_table VALUES (now(),'REGULAR','The first regular message')
INSERT INTO stripe_log_table VALUES (now(),'REGULAR','The second regular message'),(now(),'WARNING','The first warning message')
We used two
INSERT
queries to create two data blocks inside the
data.bin
file.
ClickHouse uses multiple threads when selecting data. Each thread reads a separate data block and returns resulting rows independently as it finishes. As a result, the order of blocks of rows in the output does not match the order of the same blocks in the input in most cases. For example:
sql
SELECT * FROM stripe_log_table
text
ββββββββββββtimestampββ¬βmessage_typeββ¬βmessageβββββββββββββββββββββ
β 2019-01-18 14:27:32 β REGULAR β The second regular message β
β 2019-01-18 14:34:53 β WARNING β The first warning message β
βββββββββββββββββββββββ΄βββββββββββββββ΄βββββββββββββββββββββββββββββ
ββββββββββββtimestampββ¬βmessage_typeββ¬βmessageββββββββββββββββββββ
β 2019-01-18 14:23:43 β REGULAR β The first regular message β
βββββββββββββββββββββββ΄βββββββββββββββ΄ββββββββββββββββββββββββββββ
Sorting the results (ascending order by default):
sql
SELECT * FROM stripe_log_table ORDER BY timestamp
text
ββββββββββββtimestampββ¬βmessage_typeββ¬βmessageβββββββββββββββββββββ
β 2019-01-18 14:23:43 β REGULAR β The first regular message β
β 2019-01-18 14:27:32 β REGULAR β The second regular message β
β 2019-01-18 14:34:53 β WARNING β The first warning message β
βββββββββββββββββββββββ΄βββββββββββββββ΄βββββββββββββββββββββββββββββ
description: 'Documentation for the TinyLog table engine'
slug: /engines/table-engines/log-family/tinylog
toc_priority: 34
toc_title: 'TinyLog'
title: 'TinyLog table engine'
doc_type: 'reference'
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
TinyLog table engine
The engine belongs to the log engine family. See
Log Engine Family
for common properties of log engines and their differences.
This table engine is typically used with the write-once method: write data one time, then read it as many times as necessary. For example, you can use
TinyLog
-type tables for intermediary data that is processed in small batches. Note that storing data in a large number of small tables is inefficient.
Queries are executed in a single stream. In other words, this engine is intended for relatively small tables (up to about 1,000,000 rows). It makes sense to use this table engine if you have many small tables, since it's simpler than the
Log
engine (fewer files need to be opened).
Characteristics {#characteristics}
Simpler Structure
: Unlike the Log engine, TinyLog does not use mark files. This reduces complexity but also limits performance optimizations for large datasets.
Single Stream Queries
: Queries on TinyLog tables are executed in a single stream, making it suitable for relatively small tables, typically up to 1,000,000 rows.
Efficient for Small Tables
: The simplicity of the TinyLog engine makes it advantageous when managing many small tables, as it requires fewer file operations compared to the Log engine.
Creating a table {#table_engines-tinylog-creating-a-table}
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
column1_name [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
column2_name [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
...
) ENGINE = TinyLog
See the detailed description of the
CREATE TABLE
query.
Writing the data {#table_engines-tinylog-writing-the-data}
The
TinyLog
engine stores all the columns in one file. For each
INSERT
query, ClickHouse appends the data block to the end of a table file, writing columns one by one.
For each table ClickHouse writes the files:
<column>.bin
: A data file for each column, containing the serialized and compressed data.
The
TinyLog
engine does not support the
ALTER UPDATE
and
ALTER DELETE
operations.
Example of use {#table_engines-tinylog-example-of-use}
Creating a table:
sql
CREATE TABLE tiny_log_table
(
timestamp DateTime,
message_type String,
message String
)
ENGINE = TinyLog
Inserting data:
sql
INSERT INTO tiny_log_table VALUES (now(),'REGULAR','The first regular message')
INSERT INTO tiny_log_table VALUES (now(),'REGULAR','The second regular message'),(now(),'WARNING','The first warning message')
We used two
INSERT
queries to create two data blocks inside the
<column>.bin
files.
ClickHouse uses a single stream when selecting data. As a result, the order of blocks of rows in the output matches the order of the same blocks in the input. For example:
sql
SELECT * FROM tiny_log_table
text
ββββββββββββtimestampββ¬βmessage_typeββ¬βmessageβββββββββββββββββββββ
β 2024-12-10 13:11:58 β REGULAR β The first regular message β
β 2024-12-10 13:12:12 β REGULAR β The second regular message β
β 2024-12-10 13:12:12 β WARNING β The first warning message β
βββββββββββββββββββββββ΄βββββββββββββββ΄βββββββββββββββββββββββββββββ
description: 'Documentation for Log'
slug: /engines/table-engines/log-family/log
toc_priority: 33
toc_title: 'Log'
title: 'Log table engine'
doc_type: 'reference'
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
Log table engine
The engine belongs to the family of
Log
engines. See the common properties of
Log
engines and their differences in the
Log Engine Family
article.
Log
differs from
TinyLog
in that a small file of "marks" resides with the column files. These marks are written on every data block and contain offsets that indicate where to start reading the file in order to skip the specified number of rows. This makes it possible to read table data in multiple threads.
For concurrent data access, the read operations can be performed simultaneously, while write operations block reads and each other.
The
Log
engine does not support indexes. If writing to a table fails, the table is broken, and reading from it returns an error. The
Log
engine is appropriate for temporary data, write-once tables, and for testing or demonstration purposes.
Creating a table {#table_engines-log-creating-a-table}
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
column1_name [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
column2_name [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
...
) ENGINE = Log
See the detailed description of the
CREATE TABLE
query.
Writing the data {#table_engines-log-writing-the-data}
The
Log
engine efficiently stores data by writing each column to its own file. For every table, the Log engine writes the following files to the specified storage path:
<column>.bin
: A data file for each column, containing the serialized and compressed data.
__marks.mrk
: A marks file, storing offsets and row counts for each data block inserted. Marks are used to facilitate efficient query execution by allowing the engine to skip irrelevant data blocks during reads.
Writing process {#writing-process}
When data is written to a
Log
table:
Data is serialized and compressed into blocks.
For each column, the compressed data is appended to its respective
<column>.bin
file.
Corresponding entries are added to the
__marks.mrk
file to record the offset and row count of the newly inserted data.
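As an illustration of the write path above, the sketch below appends compressed blocks to a column file and records (offset, row count) marks, then uses a mark to seek straight to a chosen block. It is a simplified toy model, not the on-disk format ClickHouse actually uses:

```python
import io
import zlib

class ColumnLog:
    """Toy model of a Log column: an append-only .bin file plus __marks.mrk entries."""
    def __init__(self):
        self.bin_file = io.BytesIO()   # stands in for <column>.bin
        self.marks = []                # stands in for __marks.mrk: (offset, rows, length)

    def insert_block(self, rows):
        payload = zlib.compress('\n'.join(rows).encode())
        offset = self.bin_file.tell()
        self.bin_file.write(payload)
        self.marks.append((offset, len(rows), len(payload)))

    def read_block(self, block_index):
        # A mark lets us seek directly to a block, skipping everything before it.
        offset, _, length = self.marks[block_index]
        self.bin_file.seek(offset)
        return zlib.decompress(self.bin_file.read(length)).decode().split('\n')

log = ColumnLog()
log.insert_block(['The first regular message'])
log.insert_block(['The second regular message', 'The first warning message'])
```

Because each block's starting offset is recorded, independent threads could each be handed a different mark and read their blocks concurrently, which is what makes the marks file the key difference from TinyLog.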
Reading the data {#table_engines-log-reading-the-data}
The file with marks allows ClickHouse to parallelize the reading of data. This means that a
SELECT
query returns rows in an unpredictable order. Use the
ORDER BY
clause to sort rows.
Example of use {#table_engines-log-example-of-use}
Creating a table:
sql
CREATE TABLE log_table
(
timestamp DateTime,
message_type String,
message String
)
ENGINE = Log
Inserting data:
sql
INSERT INTO log_table VALUES (now(),'REGULAR','The first regular message')
INSERT INTO log_table VALUES (now(),'REGULAR','The second regular message'),(now(),'WARNING','The first warning message') | {"source_file": "log.md"} | [
-0.009264801628887653,
-0.022079305723309517,
-0.04121316969394684,
0.04943642020225525,
0.054009951651096344,
-0.06738809496164322,
-0.0004801162285730243,
0.015636611729860306,
0.024140052497386932,
0.07900499552488327,
0.02167380414903164,
0.00012051147496094927,
0.027964016422629356,
-... |
5d3543da-44f2-454e-b79d-cb4df77610a2 | We used two
INSERT
queries to create two data blocks inside the
<column>.bin
files.
ClickHouse uses multiple threads when selecting data. Each thread reads a separate data block and returns resulting rows independently as it finishes. As a result, the order of blocks of rows in the output may not match the order of the same blocks in the input. For example:
sql
SELECT * FROM log_table
text
┌───────────timestamp─┬─message_type─┬─message────────────────────┐
│ 2019-01-18 14:27:32 │ REGULAR      │ The second regular message │
│ 2019-01-18 14:34:53 │ WARNING      │ The first warning message  │
└─────────────────────┴──────────────┴────────────────────────────┘
┌───────────timestamp─┬─message_type─┬─message───────────────────┐
│ 2019-01-18 14:23:43 │ REGULAR      │ The first regular message │
└─────────────────────┴──────────────┴───────────────────────────┘
Sorting the results (ascending order by default):
sql
SELECT * FROM log_table ORDER BY timestamp
text
┌───────────timestamp─┬─message_type─┬─message────────────────────┐
│ 2019-01-18 14:23:43 │ REGULAR      │ The first regular message  │
│ 2019-01-18 14:27:32 │ REGULAR      │ The second regular message │
│ 2019-01-18 14:34:53 │ WARNING      │ The first warning message  │
└─────────────────────┴──────────────┴────────────────────────────┘ | {"source_file": "log.md"} | [
-0.009679094888269901,
-0.05480460822582245,
0.056175872683525085,
0.02425192855298519,
0.003229742869734764,
-0.06675982475280762,
-0.007189629133790731,
-0.07884824275970459,
0.05675841495394707,
0.04857856035232544,
0.07151751220226288,
0.023440822958946228,
0.04354200139641762,
-0.1124... |
e219f2ab-e75e-42a8-b17e-fcab592915d4 | description: 'Documentation for the Log engine family'
sidebar_label: 'Log family'
sidebar_position: 20
slug: /engines/table-engines/log-family/
title: 'Log engine family'
doc_type: 'guide'
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
Log table engine family
These engines were developed for scenarios when you need to quickly write many small tables (up to about 1 million rows) and read them later as a whole.
Engines of the family:
| Log Engines |
|-------------|
| StripeLog   |
| Log         |
| TinyLog     |
Log
family table engines can store data to
HDFS
or
S3
distributed file systems.
:::warning This engine is not for log data.
Despite the name, *Log table engines are not meant for the storage of log data. They should only be used for small volumes which need to be written quickly.
:::
Common properties {#common-properties}
Engines:
Store data on a disk.
Append data to the end of file when writing.
Support locks for concurrent data access.
During
INSERT
queries, the table is locked, and other queries for reading and writing data both wait for the table to unlock. If there are no data writing queries, any number of data reading queries can be performed concurrently.
Do not support
mutations
.
Do not support indexes.
This means that
SELECT
queries for ranges of data are not efficient.
Do not write data atomically.
You can get a table with corrupted data if something breaks the write operation, for example, abnormal server shutdown.
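The concurrency model above (an exclusive lock during `INSERT`, any number of concurrent readers otherwise) can be sketched with a minimal readers–writer lock. This is only an illustration of the locking rule, not ClickHouse's implementation:

```python
import threading

class ReadWriteLock:
    """Writer-exclusive lock: many readers OR one writer, as in the Log family."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writing = False

    def acquire_read(self):
        with self._cond:
            while self._writing:          # readers wait while an INSERT holds the table
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_write(self):
        with self._cond:
            # An INSERT waits for the table to be free of readers and writers
            while self._writing or self._readers > 0:
                self._cond.wait()
            self._writing = True

    def release_write(self):
        with self._cond:
            self._writing = False
            self._cond.notify_all()
```

Any number of `acquire_read` calls can be active at once, while `acquire_write` blocks until the table is idle, matching the behavior described above.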
Differences {#differences}
The
TinyLog
engine is the simplest in the family and provides the poorest functionality and lowest efficiency. The
TinyLog
engine does not support parallel data reading by several threads in a single query. It reads data more slowly than the other engines in the family that support parallel reading, and it uses almost as many file descriptors as the
Log
engine because it stores each column in a separate file. Use it only in simple scenarios.
The
Log
and
StripeLog
engines support parallel data reading. When reading data, ClickHouse uses multiple threads. Each thread processes a separate data block. The
Log
engine uses a separate file for each column of the table.
StripeLog
stores all the data in one file. As a result, the
StripeLog
engine uses fewer file descriptors, but the
Log
engine provides higher efficiency when reading data. | {"source_file": "index.md"} | [
-0.01693090796470642,
-0.006005195900797844,
-0.07017701119184494,
0.026462575420737267,
0.05904192477464676,
-0.021073127165436745,
-0.031124668195843697,
-0.01743466593325138,
-0.002187232719734311,
0.03422340005636215,
0.02101590298116207,
-0.015420232899487019,
0.041646216064691544,
-0... |
987dec88-da0f-45ec-a280-0224b48507de | description: 'Buffers the data to write in RAM, periodically flushing it to another
table. During the read operation, data is read from the buffer and the other table
simultaneously.'
sidebar_label: 'Buffer'
sidebar_position: 120
slug: /engines/table-engines/special/buffer
title: 'Buffer table engine'
doc_type: 'reference'
Buffer table engine
Buffers the data to write in RAM, periodically flushing it to another table. During the read operation, data is read from the buffer and the other table simultaneously.
:::note
A recommended alternative to the Buffer Table Engine is enabling
asynchronous inserts
.
:::
sql
Buffer(database, table, num_layers, min_time, max_time, min_rows, max_rows, min_bytes, max_bytes [,flush_time [,flush_rows [,flush_bytes]]])
Engine parameters {#engine-parameters}
database
{#database}
database
β Database name. You can use
currentDatabase()
or another constant expression that returns a string.
table
{#table}
table
β Table to flush data to.
num_layers
{#num_layers}
num_layers
β Parallelism layer. Physically, the table will be represented as
num_layers
of independent buffers.
min_time
,
max_time
,
min_rows
,
max_rows
,
min_bytes
, and
max_bytes
{#min_time-max_time-min_rows-max_rows-min_bytes-and-max_bytes}
Conditions for flushing data from the buffer.
Optional engine parameters {#optional-engine-parameters}
flush_time
,
flush_rows
, and
flush_bytes
{#flush_time-flush_rows-and-flush_bytes}
Conditions for flushing data from the buffer in the background (omitted or zero disables the background
flush*
conditions).
Data is flushed from the buffer and written to the destination table if all the
min*
conditions or at least one
max*
condition are met.
Also, if at least one
flush*
condition is met, a flush is initiated in the background. This differs from
max*
since
flush*
allows you to configure background flushes separately to avoid adding latency for
INSERT
queries into Buffer tables.
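The flush rule above can be written down as a small predicate: flush when all of the `min*` thresholds are met, or when any single `max*` threshold is met. The sketch below is an illustration of the documented condition, not ClickHouse code:

```python
def should_flush(rows, size_bytes, seconds,
                 min_rows, min_bytes, min_time,
                 max_rows, max_bytes, max_time):
    # Flush when ALL of the min* conditions hold...
    all_min = rows >= min_rows and size_bytes >= min_bytes and seconds >= min_time
    # ...or when ANY single max* condition holds.
    any_max = rows >= max_rows or size_bytes >= max_bytes or seconds >= max_time
    return all_min or any_max
```

For instance, with `min_time=10, min_rows=10000, min_bytes=10_000_000` and `max_time=100, max_rows=1_000_000, max_bytes=100_000_000`, a buffer holding a single row still flushes once 100 seconds have passed, because one `max*` condition is enough.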
min_time
,
max_time
, and
flush_time
{#min_time-max_time-and-flush_time}
Condition for the time in seconds from the moment of the first write to the buffer.
min_rows
,
max_rows
, and
flush_rows
{#min_rows-max_rows-and-flush_rows}
Condition for the number of rows in the buffer.
min_bytes
,
max_bytes
, and
flush_bytes
{#min_bytes-max_bytes-and-flush_bytes}
Condition for the number of bytes in the buffer.
During the write operation, data is inserted into one or more random buffers (configured with
num_layers
). Or, if the data part to insert is large enough (greater than
max_rows
or
max_bytes
), it is written directly to the destination table, omitting the buffer.
The conditions for flushing the data are calculated separately for each of the
num_layers
buffers. For example, if
num_layers = 16
and
max_bytes = 100000000
, the maximum RAM consumption is 1.6 GB.
Example: | {"source_file": "buffer.md"} | [
0.043253663927316666,
-0.02470880188047886,
-0.07619745284318924,
0.09402518719434738,
-0.09276474267244339,
-0.006760335061699152,
0.02301831543445587,
-0.023372719064354897,
-0.029600221663713455,
0.027178339660167694,
0.019960271194577217,
0.010829064063727856,
0.058558184653520584,
-0.... |
bf605fb7-95a4-4321-afb7-4262d4549eac | Example:
sql
CREATE TABLE merge.hits_buffer AS merge.hits ENGINE = Buffer(merge, hits, 1, 10, 100, 10000, 1000000, 10000000, 100000000)
Creating a
merge.hits_buffer
table with the same structure as
merge.hits
and using the Buffer engine. When writing to this table, data is buffered in RAM and later written to the 'merge.hits' table. A single buffer is created and the data is flushed if either:
- 100 seconds have passed since the last flush (
max_time
) or
- 1 million rows have been written (
max_rows
) or
- 100 MB of data have been written (
max_bytes
) or
- 10 seconds have passed (
min_time
) and 10,000 rows (
min_rows
) and 10 MB (
min_bytes
) of data have been written
For example, if just one row has been written, after 100 seconds, it will be flushed, no matter what. But if many rows have been written, the data will be flushed sooner.
When the server is stopped, with
DROP TABLE
or
DETACH TABLE
, buffered data is also flushed to the destination table.
You can set empty strings in single quotation marks for the database and table name. This indicates the absence of a destination table. In this case, when the data flush conditions are reached, the buffer is simply cleared. This may be useful for keeping a window of data in memory.
When reading from a Buffer table, data is processed both from the buffer and from the destination table (if there is one).
Note that the Buffer table does not support an index. In other words, data in the buffer is fully scanned, which might be slow for large buffers. (For data in a subordinate table, the index that it supports will be used.)
If the set of columns in the Buffer table does not match the set of columns in a subordinate table, a subset of columns that exist in both tables is inserted.
If the types do not match for one of the columns in the Buffer table and a subordinate table, an error message is entered in the server log, and the buffer is cleared.
The same happens if the subordinate table does not exist when the buffer is flushed.
:::note
Running ALTER on the Buffer table in releases made before 26 Oct 2021 will cause a
Block structure mismatch
error (see
#15117
and
#30565
), so deleting the Buffer table and then recreating it is the only option. Check that this error is fixed in your release before trying to run ALTER on the Buffer table.
:::
If the server is restarted abnormally, the data in the buffer is lost.
FINAL
and
SAMPLE
do not work correctly for Buffer tables. These conditions are passed to the destination table but are not used for processing data in the buffer. If these features are required, we recommend only using the Buffer table for writing while reading from the destination table.
When adding data to a Buffer table, one of the buffers is locked. This causes delays if a read operation is simultaneously being performed from the table. | {"source_file": "buffer.md"} | [
0.04474625736474991,
-0.048017241060733795,
-0.024037163704633713,
0.009945441968739033,
-0.10415895283222198,
-0.07874604314565659,
0.004464461002498865,
0.018023129552602768,
0.06295707076787949,
0.05105629190802574,
0.04608936607837677,
0.058605510741472244,
0.02303171530365944,
-0.1241... |
d9c2b1d0-5eaf-4a24-a8d6-62bd94db0d35 | When adding data to a Buffer table, one of the buffers is locked. This causes delays if a read operation is simultaneously being performed from the table.
Data that is inserted into a Buffer table may end up in the subordinate table in a different order and in different blocks. Because of this, a Buffer table is difficult to use for writing to a CollapsingMergeTree correctly. To avoid problems, you can set
num_layers
to 1.
If the destination table is replicated, some expected characteristics of replicated tables are lost when writing to a Buffer table. The random changes to the order of rows and sizes of data parts cause data deduplication to quit working, which means it is not possible to have a reliable 'exactly once' write to replicated tables.
Due to these disadvantages, we can only recommend using a Buffer table in rare cases.
A Buffer table is used when too many INSERTs are received from a large number of servers over a unit of time, and data can't be buffered before insertion, which means the INSERTs can't run fast enough.
Note that it does not make sense to insert data one row at a time, even for Buffer tables. This will only produce a speed of a few thousand rows per second while inserting larger blocks of data can produce over a million rows per second. | {"source_file": "buffer.md"} | [
0.00815369002521038,
-0.05377475172281265,
0.007692419923841953,
0.018400421366095543,
-0.09253144264221191,
-0.07058420777320862,
-0.02791673317551613,
0.0037735726218670607,
0.033372912555933,
0.05931088700890541,
0.024089720100164413,
0.03001703880727291,
0.0066077206283807755,
-0.08827... |
692c3552-b114-4c91-8c43-429adee4f2aa | description: 'The
Executable
and
ExecutablePool
table engines allow you to define
a table whose rows are generated from a script that you define (by writing rows
to
stdout
).'
sidebar_label: 'Executable/ExecutablePool'
sidebar_position: 40
slug: /engines/table-engines/special/executable
title: 'Executable and ExecutablePool table engines'
doc_type: 'reference'
Executable and ExecutablePool table engines
The
Executable
and
ExecutablePool
table engines allow you to define a table whose rows are generated from a script that you define (by writing rows to
stdout
). The executable script is stored in the
users_scripts
directory and can read data from any source.
Executable
tables: the script is run on every query
ExecutablePool
tables: maintains a pool of persistent processes, and takes processes from the pool for reads
You can optionally include one or more input queries that stream their results to
stdin
for the script to read.
Creating an
Executable
table {#creating-an-executable-table}
The
Executable
table engine requires two parameters: the name of the script and the format of the incoming data. You can optionally pass in one or more input queries:
sql
Executable(script_name, format, [input_query...])
Here are the relevant settings for an
Executable
table:
send_chunk_header
Description: Send the number of rows in each chunk before sending the chunk to the process. This setting can help you write your script more efficiently by preallocating resources up front
Default value: false
command_termination_timeout
Description: Command termination timeout in seconds
Default value: 10
command_read_timeout
Description: Timeout for reading data from command stdout in milliseconds
Default value: 10000
command_write_timeout
Description: Timeout for writing data to command stdin in milliseconds
Default value: 10000
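When `send_chunk_header` is enabled, each chunk arrives on `stdin` prefixed with a line containing its row count, which lets a script preallocate before reading the rows. A minimal sketch of such a reader (illustrative only; the framing here is the assumption described by the setting):

```python
import io

def read_chunks(stream):
    """Read chunks framed as: a line with the row count, then that many rows."""
    chunks = []
    while True:
        header = stream.readline()
        if header == '':
            break
        n = int(header)
        rows = [None] * n          # the header lets us preallocate
        for i in range(n):
            rows[i] = stream.readline().rstrip('\n')
        chunks.append(rows)
    return chunks
```

For example, `read_chunks(io.StringIO('2\na\nb\n1\nc\n'))` parses two chunks of two and one rows respectively.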
Let's look at an example. The following Python script is named
my_script.py
and is saved in the
user_scripts
folder. It reads in a number
i
and prints
i
random strings, with each string preceded by a number that is separated by a tab:
```python
#!/usr/bin/python3
import sys
import string
import random
def main():
    # Read input value
    for number in sys.stdin:
        i = int(number)
        # Generate some random rows
        for id in range(0, i):
            letters = string.ascii_letters
            random_string = ''.join(random.choices(letters, k=10))
            print(str(id) + '\t' + random_string + '\n', end='')
        # Flush results to stdout
        sys.stdout.flush()

if __name__ == "__main__":
    main()
```
The following
my_executable_table
is built from the output of
my_script.py
, which will generate 10 random strings every time you run a
SELECT
from
my_executable_table
:
sql
CREATE TABLE my_executable_table (
x UInt32,
y String
)
ENGINE = Executable('my_script.py', TabSeparated, (SELECT 10)) | {"source_file": "executable.md"} | [
0.017186807468533516,
-0.02254047617316246,
-0.1180926263332367,
0.026358725503087044,
-0.03238048776984215,
-0.046525221318006516,
0.03970028832554817,
0.045386940240859985,
-0.03316667675971985,
0.016854623332619667,
0.000728039420209825,
-0.02137892320752144,
0.04505396634340286,
-0.060... |
be40e362-861e-4cbb-ada2-a6ac56a9cdb9 | sql
CREATE TABLE my_executable_table (
x UInt32,
y String
)
ENGINE = Executable('my_script.py', TabSeparated, (SELECT 10))
Creating the table returns immediately and does not invoke the script. Querying
my_executable_table
causes the script to be invoked:
sql
SELECT * FROM my_executable_table
response
┌─x─┬─y──────────┐
│ 0 │ BsnKBsNGNH │
│ 1 │ mgHfBCUrWM │
│ 2 │ iDQAVhlygr │
│ 3 │ uNGwDuXyCk │
│ 4 │ GcFdQWvoLB │
│ 5 │ UkciuuOTVO │
│ 6 │ HoKeCdHkbs │
│ 7 │ xRvySxqAcR │
│ 8 │ LKbXPHpyDI │
│ 9 │ zxogHTzEVV │
└───┴────────────┘
Passing query results to a script {#passing-query-results-to-a-script}
Users of the Hacker News website leave comments. Python contains a natural language processing toolkit (
nltk
) with a
SentimentIntensityAnalyzer
for determining if comments are positive, negative, or neutral - including assigning a value between -1 (a very negative comment) and 1 (a very positive comment). Let's create an
Executable
table that computes the sentiment of Hacker News comments using
nltk
.
This example uses the
hackernews
table described
here
. The
hackernews
table includes an
id
column of type
UInt64
and a
String
column named
comment
. Let's start by defining the
Executable
table:
sql
CREATE TABLE sentiment (
id UInt64,
sentiment Float32
)
ENGINE = Executable(
'sentiment.py',
TabSeparated,
(SELECT id, comment FROM hackernews WHERE id > 0 AND comment != '' LIMIT 20)
);
Some comments about the
sentiment
table:
The file
sentiment.py
is saved in the
user_scripts
folder (the default folder of the
user_scripts_path
setting)
The
TabSeparated
format means our Python script needs to generate rows of raw data that contain tab-separated values
The query selects two columns from
hackernews
. The Python script will need to parse out those column values from the incoming rows
Here is the definition of
sentiment.py
:
```python
#!/usr/local/bin/python3.9
import sys
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer
def main():
    sentiment_analyzer = SentimentIntensityAnalyzer()
    while True:
        try:
            row = sys.stdin.readline()
            if row == '':
                break
            split_line = row.split("\t")
            id = str(split_line[0])
            comment = split_line[1]
            score = sentiment_analyzer.polarity_scores(comment)['compound']
            print(id + '\t' + str(score) + '\n', end='')
            sys.stdout.flush()
        except BaseException as x:
            break

if __name__ == "__main__":
    main()
```
Some comments about our Python script:
For this to work, you will need to run
nltk.downloader.download('vader_lexicon')
. This could have been placed in the script, but then it would have been downloaded every time a query was executed on the
sentiment
table - which is not efficient
Each value of
row
is going to be a row in the result set of
SELECT id, comment FROM hackernews WHERE id > 0 AND comment != '' LIMIT 20 | {"source_file": "executable.md"} | [
-0.003473132150247693,
-0.043549731373786926,
-0.06489360332489014,
0.07580388337373734,
-0.02337300404906273,
-0.10339780896902084,
0.08227306604385376,
0.05547507107257843,
-0.01787039078772068,
0.05620480328798294,
0.023754792287945747,
-0.05057476460933685,
0.10262863337993622,
-0.0521... |
85125cb5-c474-45fe-925f-b29d333e0c17 | Each value of
row
is going to be a row in the result set of
SELECT id, comment FROM hackernews WHERE id > 0 AND comment != '' LIMIT 20
The incoming row is tab-separated, so we parse out the
id
and
comment
using the Python
split
function
The result of
polarity_scores
is a JSON object with a handful of values. We decided to just grab the
compound
value of this JSON object
Recall that the
sentiment
table in ClickHouse uses the
TabSeparated
format and contains two columns, so our
print
function separates those columns with a tab
Every time you write a query that selects rows from the
sentiment
table, the
SELECT id, comment FROM hackernews WHERE id > 0 AND comment != '' LIMIT 20
query is executed and the result is passed to
sentiment.py
. Let's test it out:
sql
SELECT *
FROM sentiment
The response looks like:
response
┌───────id─┬─sentiment─┐
│  7398199 │    0.4404 │
│ 21640317 │    0.1779 │
│ 21462000 │         0 │
│ 25168863 │         0 │
│ 25168978 │   -0.1531 │
│ 25169359 │         0 │
│ 25169394 │   -0.9231 │
│ 25169766 │    0.4137 │
│ 25172570 │    0.7469 │
│ 25173687 │    0.6249 │
│ 28291534 │         0 │
│ 28291669 │   -0.4767 │
│ 28291731 │         0 │
│ 28291949 │   -0.4767 │
│ 28292004 │    0.3612 │
│ 28292050 │    -0.296 │
│ 28292322 │         0 │
│ 28295172 │    0.7717 │
│ 28295288 │    0.4404 │
│ 21465723 │   -0.6956 │
└──────────┴───────────┘
Creating an
ExecutablePool
table {#creating-an-executablepool-table}
The syntax for
ExecutablePool
is similar to
Executable
, but there are a couple of relevant settings unique to an
ExecutablePool
table:
pool_size
Description: Processes pool size. If size is 0, then there are no size restrictions
Default value: 16
max_command_execution_time
Description: Max command execution time in seconds
Default value: 10
We can easily convert the
sentiment
table above to use
ExecutablePool
instead of
Executable
:
sql
CREATE TABLE sentiment_pooled (
id UInt64,
sentiment Float32
)
ENGINE = ExecutablePool(
'sentiment.py',
TabSeparated,
(SELECT id, comment FROM hackernews WHERE id > 0 AND comment != '' LIMIT 20000)
)
SETTINGS
pool_size = 4;
ClickHouse will maintain 4 processes on-demand when your client queries the
sentiment_pooled
table. | {"source_file": "executable.md"} | [
-0.057493697851896286,
0.010856209322810173,
-0.05293198302388191,
0.07035055756568909,
0.0006465190672315657,
-0.06064066290855408,
0.0817391648888588,
0.016307953745126724,
0.00014187702618073672,
0.010173629969358444,
0.08705714344978333,
0.02020670659840107,
0.05076619237661362,
-0.043... |
5de2283b-75f9-40f7-bb05-7c0d0ab767f6 | description: 'Queries data to/from a remote HTTP/HTTPS server. This engine is similar
to the File engine.'
sidebar_label: 'URL'
sidebar_position: 80
slug: /engines/table-engines/special/url
title: 'URL table engine'
doc_type: 'reference'
URL table engine
Queries data to/from a remote HTTP/HTTPS server. This engine is similar to the
File
engine.
Syntax:
URL(URL [,Format] [,CompressionMethod])
The
URL
parameter must conform to the structure of a Uniform Resource Locator. The specified URL must point to a server that uses HTTP or HTTPS. This does not require any additional headers for getting a response from the server.
The
Format
must be one that ClickHouse can use in
SELECT
queries and, if necessary, in
INSERTs
. For the full list of supported formats, see
Formats
.
If this argument is not specified, ClickHouse detects the format automatically from the suffix of the
URL
parameter. If the suffix of
URL
parameter does not match any supported format, creating the table fails. For example, for the engine expression
URL('http://localhost/test.json')
,
JSON
format is applied.
CompressionMethod
indicates whether the HTTP body should be compressed. If compression is enabled, the HTTP packets sent by the URL engine contain a 'Content-Encoding' header to indicate which compression method is used.
To enable compression, first make sure that the remote HTTP endpoint indicated by the
URL
parameter supports the corresponding compression algorithm.
The supported
CompressionMethod
must be one of the following:
- gzip or gz
- deflate
- brotli or br
- lzma or xz
- zstd or zst
- lz4
- bz2
- snappy
- none
- auto
If
CompressionMethod
is not specified, it defaults to
auto
. This means ClickHouse automatically detects the compression method from the suffix of the
URL
parameter. If the suffix matches one of the compression methods listed above, the corresponding compression is applied; otherwise, no compression is enabled.
For example, for engine expression
URL('http://localhost/test.gzip')
,
gzip
compression method is applied, but for
URL('http://localhost/test.fr')
, no compression is enabled because the suffix
fr
does not match any compression methods above.
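The suffix-based detection described above can be sketched as a small lookup. The suffix table here mirrors the list in this section; the sketch is an illustration, not ClickHouse's implementation:

```python
# Map of recognized URL suffixes to compression methods, per the list above.
SUFFIX_TO_COMPRESSION = {
    'gzip': 'gzip', 'gz': 'gzip',
    'deflate': 'deflate',
    'brotli': 'brotli', 'br': 'brotli',
    'lzma': 'lzma', 'xz': 'lzma',
    'zstd': 'zstd', 'zst': 'zstd',
    'lz4': 'lz4',
    'bz2': 'bz2',
    'snappy': 'snappy',
}

def detect_compression(url: str) -> str:
    """Return the compression method implied by the URL suffix, or 'none'."""
    suffix = url.rsplit('.', 1)[-1].lower()
    return SUFFIX_TO_COMPRESSION.get(suffix, 'none')
```

So `detect_compression('http://localhost/test.gzip')` yields `gzip`, while `detect_compression('http://localhost/test.fr')` yields `none`, matching the examples above.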
Usage {#using-the-engine-in-the-clickhouse-server}
INSERT
and
SELECT
queries are transformed to
POST
and
GET
requests,
respectively. For processing
POST
requests, the remote server must support
Chunked transfer encoding
.
You can limit the maximum number of HTTP GET redirect hops using the
max_http_get_redirects
setting.
Example {#example}
1.
Create a
url_engine_table
table on the server:
sql
CREATE TABLE url_engine_table (word String, value UInt64)
ENGINE=URL('http://127.0.0.1:12345/', CSV)
2.
Create a basic HTTP server using the standard Python 3 tools and
start it:
```python3
from http.server import BaseHTTPRequestHandler, HTTPServer | {"source_file": "url.md"} | [
-0.04615458473563194,
0.013481481932103634,
-0.05140285566449165,
0.015682298690080643,
-0.035179246217012405,
-0.07134687155485153,
-0.06771577894687653,
-0.028783250600099564,
0.021913064643740654,
-0.03212068974971771,
0.012751477770507336,
-0.03306850045919418,
0.04040945693850517,
-0.... |
4823923f-2e77-4023-a862-532a77413092 | 2.
Create a basic HTTP server using the standard Python 3 tools and
start it:
```python3
from http.server import BaseHTTPRequestHandler, HTTPServer
class CSVHTTPServer(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/csv')
        self.end_headers()
        self.wfile.write(bytes('Hello,1\nWorld,2\n', "utf-8"))

if __name__ == "__main__":
    server_address = ('127.0.0.1', 12345)
    HTTPServer(server_address, CSVHTTPServer).serve_forever()
```
bash
$ python3 server.py
3.
Request data:
sql
SELECT * FROM url_engine_table
text
┌─word──┬─value─┐
│ Hello │     1 │
│ World │     2 │
└───────┴───────┘
Details of Implementation {#details-of-implementation}
Reads and writes can be parallel
Not supported:
ALTER
and
SELECT...SAMPLE
operations.
Indexes.
Replication.
Virtual columns {#virtual-columns}
_path
β Path to the
URL
. Type:
LowCardinality(String)
.
_file
β Resource name of the
URL
. Type:
LowCardinality(String)
.
_size
β Size of the resource in bytes. Type:
Nullable(UInt64)
. If the size is unknown, the value is
NULL
.
_time
β Last modified time of the file. Type:
Nullable(DateTime)
. If the time is unknown, the value is
NULL
.
_headers
- HTTP response headers. Type:
Map(LowCardinality(String), LowCardinality(String))
.
Storage settings {#storage-settings}
engine_url_skip_empty_files
- allows skipping empty files when reading. Disabled by default.
enable_url_encoding
- enables/disables decoding/encoding of the path in the URI. Enabled by default. | {"source_file": "url.md"} | [
-0.028547994792461395,
-0.005544534418731928,
-0.09767764806747437,
0.028293317183852196,
-0.07870527356863022,
-0.17003187537193298,
-0.08022308349609375,
-0.021432951092720032,
0.0513799823820591,
0.02653372474014759,
-0.034474365413188934,
-0.019236605614423752,
0.030530354008078575,
-0... |
a1613b05-8a75-429e-ab75-6f2d75b9c854 | description: 'Used for implementing views (for more information, see the
CREATE VIEW
query
). It does not store data, but only stores the specified
SELECT
query. When
reading from a table, it runs this query (and deletes all unnecessary columns from
the query).'
sidebar_label: 'View'
sidebar_position: 90
slug: /engines/table-engines/special/view
title: 'View table engine'
doc_type: 'reference'
View table engine
Used for implementing views (for more information, see the
CREATE VIEW query
). It does not store data, but only stores the specified
SELECT
query. When reading from a table, it runs this query (and deletes all unnecessary columns from the query). | {"source_file": "view.md"} | [
-0.030238697305321693,
-0.003433461533859372,
-0.12142852693796158,
0.0755542442202568,
0.010212739929556847,
-0.005082013551145792,
0.02374734729528427,
-0.00896519236266613,
0.042319364845752716,
-0.00011805356189142913,
0.08634194731712341,
0.08035990595817566,
0.028466958552598953,
-0.... |
947ad861-7754-4557-bfd2-62f7acc81d1e | description: 'Tables with Distributed engine do not store any data of their own, but
allow distributed query processing on multiple servers. Reading is automatically
parallelized. During a read, the table indexes on remote servers are used, if there
are any.'
sidebar_label: 'Distributed'
sidebar_position: 10
slug: /engines/table-engines/special/distributed
title: 'Distributed table engine'
doc_type: 'reference'
Distributed table engine
:::warning Distributed engine in Cloud
To create a distributed table engine in ClickHouse Cloud, you can use the
remote
and
remoteSecure
table functions.
The
Distributed(...)
syntax cannot be used in ClickHouse Cloud.
:::
Tables with Distributed engine do not store any data of their own, but allow distributed query processing on multiple servers.
Reading is automatically parallelized. During a read, the table indexes on remote servers are used if they exist.
Creating a table {#distributed-creating-a-table}
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
...
) ENGINE = Distributed(cluster, database, table[, sharding_key[, policy_name]])
[SETTINGS name=value, ...]
From a table {#distributed-from-a-table}
When the
Distributed
table is pointing to a table on the current server you can adopt that table's schema:
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster] AS [db2.]name2 ENGINE = Distributed(cluster, database, table[, sharding_key[, policy_name]]) [SETTINGS name=value, ...]
Distributed parameters {#distributed-parameters}
| Parameter | Description |
|-----------|-------------|
| `cluster` | The cluster name in the server's config file |
| `database` | The name of a remote database |
| `table` | The name of a remote table |
| `sharding_key` (optional) | The sharding key. Specifying the `sharding_key` is necessary for: `INSERT`s into a distributed table (the table engine needs the `sharding_key` to determine how to split the data), unless the `insert_distributed_one_random_shard` setting is enabled; and for use with `optimize_skip_unused_shards`, since the `sharding_key` is needed to determine which shards should be queried |
| | {"source_file": "distributed.md"} | [
0.010146044194698334,
0.0759410560131073,
-0.042988814413547516,
-0.007348221261054277,
-0.09025534242391586,
0.023905031383037567,
0.020156145095825195,
0.041932541877031326,
-0.0032095573842525482,
-0.06833912432193756,
0.041272617876529694,
-0.04904787614941597,
0.00046499198651872575,
... |
| `policy_name` (optional) | The policy name; it will be used to store temporary files for background send |
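The role of the `sharding_key` can be sketched as follows (the cluster and table names are assumptions; `intHash64` is used here only to get an even distribution from a numeric key):

```sql
-- Local table that exists on every shard.
CREATE TABLE metrics_local ON CLUSTER my_cluster
(
    UserID    UInt64,
    EventTime DateTime
) ENGINE = MergeTree
ORDER BY (UserID, EventTime);

-- Distributed table routing each row by intHash64(UserID) modulo the
-- total shard weight, so all rows for one user land on the same shard.
CREATE TABLE metrics_all ON CLUSTER my_cluster
AS metrics_local
ENGINE = Distributed(my_cluster, currentDatabase(), metrics_local, intHash64(UserID));

INSERT INTO metrics_all VALUES (42, now()), (7, now());
```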
See also

- `distributed_foreground_insert` setting
- `MergeTree` for the examples

Distributed settings {#distributed-settings}