```
β”Œβ”€part_key─┬─value─┬─order─┬─frame_values─┐
β”‚        1 β”‚     1 β”‚     1 β”‚ [1]          β”‚
β”‚        1 β”‚     2 β”‚     2 β”‚ [1,2]        β”‚
β”‚        1 β”‚     3 β”‚     3 β”‚ [1,2,3]      β”‚
β”‚        1 β”‚     4 β”‚     4 β”‚ [1,2,3,4]    β”‚
β”‚        1 β”‚     5 β”‚     5 β”‚ [1,2,3,4,5]  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```

```sql
-- short form (frame is bounded by the beginning of the partition and the current row)
-- equivalent to ORDER BY order ASC ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
SELECT
    part_key,
    value,
    order,
    groupArray(value) OVER (PARTITION BY part_key ORDER BY order ASC) AS frame_values_short,
    groupArray(value) OVER (
        PARTITION BY part_key
        ORDER BY order ASC
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
    ) AS frame_values
FROM wf_frame
ORDER BY part_key ASC, value ASC;
```

```
β”Œβ”€part_key─┬─value─┬─order─┬─frame_values_short─┬─frame_values─┐
β”‚        1 β”‚     1 β”‚     1 β”‚ [1]                β”‚ [1]          β”‚
β”‚        1 β”‚     2 β”‚     2 β”‚ [1,2]              β”‚ [1,2]        β”‚
β”‚        1 β”‚     3 β”‚     3 β”‚ [1,2,3]            β”‚ [1,2,3]      β”‚
β”‚        1 β”‚     4 β”‚     4 β”‚ [1,2,3,4]          β”‚ [1,2,3,4]    β”‚
β”‚        1 β”‚     5 β”‚     5 β”‚ [1,2,3,4,5]        β”‚ [1,2,3,4,5]  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```

```sql
-- frame is bounded by the beginning of the partition and the current row, but the order is backward
SELECT
    part_key,
    value,
    order,
    groupArray(value) OVER (PARTITION BY part_key ORDER BY order DESC) AS frame_values
FROM wf_frame
ORDER BY part_key ASC, value ASC;
```

```
β”Œβ”€part_key─┬─value─┬─order─┬─frame_values─┐
β”‚        1 β”‚     1 β”‚     1 β”‚ [5,4,3,2,1]  β”‚
β”‚        1 β”‚     2 β”‚     2 β”‚ [5,4,3,2]    β”‚
β”‚        1 β”‚     3 β”‚     3 β”‚ [5,4,3]      β”‚
β”‚        1 β”‚     4 β”‚     4 β”‚ [5,4]        β”‚
β”‚        1 β”‚     5 β”‚     5 β”‚ [5]          β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```

```sql
-- sliding frame - 1 PRECEDING ROW AND CURRENT ROW
SELECT
    part_key,
    value,
    order,
    groupArray(value) OVER (
        PARTITION BY part_key
        ORDER BY order ASC
        ROWS BETWEEN 1 PRECEDING AND CURRENT ROW
    ) AS frame_values
FROM wf_frame
ORDER BY part_key ASC, value ASC;
```

```
β”Œβ”€part_key─┬─value─┬─order─┬─frame_values─┐
β”‚        1 β”‚     1 β”‚     1 β”‚ [1]          β”‚
β”‚        1 β”‚     2 β”‚     2 β”‚ [1,2]        β”‚
β”‚        1 β”‚     3 β”‚     3 β”‚ [2,3]        β”‚
β”‚        1 β”‚     4 β”‚     4 β”‚ [3,4]        β”‚
β”‚        1 β”‚     5 β”‚     5 β”‚ [4,5]        β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```

```sql
-- sliding frame - ROWS BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING
SELECT
    part_key,
    value,
    order,
    groupArray(value) OVER (
        PARTITION BY part_key
        ORDER BY order ASC
        ROWS BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING
    ) AS frame_values
FROM wf_frame
ORDER BY part_key ASC, value ASC;
```
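As a cross-check, the `ROWS` frame shapes shown above can be reproduced with a small Python sketch (`rows_frame` is a hypothetical helper written for illustration, not a ClickHouse function):

```python
# Sketch of ROWS BETWEEN <p> PRECEDING AND CURRENT ROW over one partition:
# each row's frame is the slice of rows the frame clause allows.

def rows_frame(values, preceding=None):
    """preceding=None models UNBOUNDED PRECEDING."""
    frames = []
    for i in range(len(values)):
        start = 0 if preceding is None else max(0, i - preceding)
        frames.append(values[start:i + 1])
    return frames

values = [1, 2, 3, 4, 5]  # the wf_frame partition, ordered by `order` ASC

# UNBOUNDED PRECEDING AND CURRENT ROW (the short form with ORDER BY)
print(rows_frame(values))               # [[1], [1, 2], [1, 2, 3], [1, 2, 3, 4], [1, 2, 3, 4, 5]]

# 1 PRECEDING AND CURRENT ROW (sliding frame of at most 2 rows)
print(rows_frame(values, preceding=1))  # [[1], [1, 2], [2, 3], [3, 4], [4, 5]]
```

The printed lists match the `frame_values` columns in the tables above.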
```
β”Œβ”€part_key─┬─value─┬─order─┬─frame_values─┐
β”‚        1 β”‚     1 β”‚     1 β”‚ [1,2,3,4,5]  β”‚
β”‚        1 β”‚     2 β”‚     2 β”‚ [1,2,3,4,5]  β”‚
β”‚        1 β”‚     3 β”‚     3 β”‚ [2,3,4,5]    β”‚
β”‚        1 β”‚     4 β”‚     4 β”‚ [3,4,5]      β”‚
β”‚        1 β”‚     5 β”‚     5 β”‚ [4,5]        β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```

```sql
-- row_number does not respect the frame, so rn_1 = rn_2 = rn_3 != rn_4
SELECT
    part_key,
    value,
    order,
    groupArray(value) OVER w1 AS frame_values,
    row_number() OVER w1 AS rn_1,
    sum(1) OVER w1 AS rn_2,
    row_number() OVER w2 AS rn_3,
    sum(1) OVER w2 AS rn_4
FROM wf_frame
WINDOW
    w1 AS (PARTITION BY part_key ORDER BY order DESC),
    w2 AS (
        PARTITION BY part_key
        ORDER BY order DESC
        ROWS BETWEEN 1 PRECEDING AND CURRENT ROW
    )
ORDER BY part_key ASC, value ASC;
```

```
β”Œβ”€part_key─┬─value─┬─order─┬─frame_values─┬─rn_1─┬─rn_2─┬─rn_3─┬─rn_4─┐
β”‚        1 β”‚     1 β”‚     1 β”‚ [5,4,3,2,1]  β”‚    5 β”‚    5 β”‚    5 β”‚    2 β”‚
β”‚        1 β”‚     2 β”‚     2 β”‚ [5,4,3,2]    β”‚    4 β”‚    4 β”‚    4 β”‚    2 β”‚
β”‚        1 β”‚     3 β”‚     3 β”‚ [5,4,3]      β”‚    3 β”‚    3 β”‚    3 β”‚    2 β”‚
β”‚        1 β”‚     4 β”‚     4 β”‚ [5,4]        β”‚    2 β”‚    2 β”‚    2 β”‚    2 β”‚
β”‚        1 β”‚     5 β”‚     5 β”‚ [5]          β”‚    1 β”‚    1 β”‚    1 β”‚    1 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜
```

```sql
-- first_value and last_value respect the frame
SELECT
    groupArray(value) OVER w1 AS frame_values_1,
    first_value(value) OVER w1 AS first_value_1,
    last_value(value) OVER w1 AS last_value_1,
    groupArray(value) OVER w2 AS frame_values_2,
    first_value(value) OVER w2 AS first_value_2,
    last_value(value) OVER w2 AS last_value_2
FROM wf_frame
WINDOW
    w1 AS (PARTITION BY part_key ORDER BY order ASC),
    w2 AS (PARTITION BY part_key ORDER BY order ASC ROWS BETWEEN 1 PRECEDING AND CURRENT ROW)
ORDER BY part_key ASC, value ASC;
```

```
β”Œβ”€frame_values_1─┬─first_value_1─┬─last_value_1─┬─frame_values_2─┬─first_value_2─┬─last_value_2─┐
β”‚ [1]            β”‚             1 β”‚            1 β”‚ [1]            β”‚             1 β”‚            1 β”‚
β”‚ [1,2]          β”‚             1 β”‚            2 β”‚ [1,2]          β”‚             1 β”‚            2 β”‚
β”‚ [1,2,3]        β”‚             1 β”‚            3 β”‚ [2,3]          β”‚             2 β”‚            3 β”‚
β”‚ [1,2,3,4]      β”‚             1 β”‚            4 β”‚ [3,4]          β”‚             3 β”‚            4 β”‚
β”‚ [1,2,3,4,5]    β”‚             1 β”‚            5 β”‚ [4,5]          β”‚             4 β”‚            5 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```

```sql
-- second value within the frame
SELECT
    groupArray(value) OVER w1 AS frame_values_1,
    nth_value(value, 2) OVER w1 AS second_value
FROM wf_frame
WINDOW w1 AS (PARTITION BY part_key ORDER BY order ASC ROWS BETWEEN 3 PRECEDING AND CURRENT ROW)
ORDER BY part_key ASC, value ASC;
```
```
β”Œβ”€frame_values_1─┬─second_value─┐
β”‚ [1]            β”‚            0 β”‚
β”‚ [1,2]          β”‚            2 β”‚
β”‚ [1,2,3]        β”‚            2 β”‚
β”‚ [1,2,3,4]      β”‚            2 β”‚
β”‚ [2,3,4,5]      β”‚            3 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```

```sql
-- second value within the frame + Null for missing values
SELECT
    groupArray(value) OVER w1 AS frame_values_1,
    nth_value(toNullable(value), 2) OVER w1 AS second_value
FROM wf_frame
WINDOW w1 AS (PARTITION BY part_key ORDER BY order ASC ROWS BETWEEN 3 PRECEDING AND CURRENT ROW)
ORDER BY part_key ASC, value ASC;
```

```
β”Œβ”€frame_values_1─┬─second_value─┐
β”‚ [1]            β”‚         ᴺᡁᴸᴸ β”‚
β”‚ [1,2]          β”‚            2 β”‚
β”‚ [1,2,3]        β”‚            2 β”‚
β”‚ [1,2,3,4]      β”‚            2 β”‚
β”‚ [2,3,4,5]      β”‚            3 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```

## Real world examples {#real-world-examples}

The following examples solve common real-world problems.

### Maximum/total salary per department {#maximumtotal-salary-per-department}

```sql
CREATE TABLE employees
(
    department String,
    employee_name String,
    salary Float
)
ENGINE = Memory;

INSERT INTO employees FORMAT Values
    ('Finance', 'Jonh', 200),
    ('Finance', 'Joan', 210),
    ('Finance', 'Jean', 505),
    ('IT', 'Tim', 200),
    ('IT', 'Anna', 300),
    ('IT', 'Elen', 500);
```

```sql
SELECT
    department,
    employee_name AS emp,
    salary,
    max_salary_per_dep,
    total_salary_per_dep,
    round((salary / total_salary_per_dep) * 100, 2) AS `share_per_dep(%)`
FROM
(
    SELECT
        department,
        employee_name,
        salary,
        max(salary) OVER wndw AS max_salary_per_dep,
        sum(salary) OVER wndw AS total_salary_per_dep
    FROM employees
    WINDOW wndw AS (
        PARTITION BY department
        ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
    )
    ORDER BY department ASC, employee_name ASC
);
```

```
β”Œβ”€department─┬─emp──┬─salary─┬─max_salary_per_dep─┬─total_salary_per_dep─┬─share_per_dep(%)─┐
β”‚ Finance    β”‚ Jean β”‚    505 β”‚                505 β”‚                  915 β”‚            55.19 β”‚
β”‚ Finance    β”‚ Joan β”‚    210 β”‚                505 β”‚                  915 β”‚            22.95 β”‚
β”‚ Finance    β”‚ Jonh β”‚    200 β”‚                505 β”‚                  915 β”‚            21.86 β”‚
β”‚ IT         β”‚ Anna β”‚    300 β”‚                500 β”‚                 1000 β”‚               30 β”‚
β”‚ IT         β”‚ Elen β”‚    500 β”‚                500 β”‚                 1000 β”‚               50 β”‚
β”‚ IT         β”‚ Tim  β”‚    200 β”‚                500 β”‚                 1000 β”‚               20 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```
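The per-department figures above can be cross-checked with a small Python sketch (plain dictionaries standing in for the unbounded window frame; no ClickHouse involved):

```python
# Recompute max/total salary per department and each employee's share,
# mirroring max(...) OVER wndw and sum(...) OVER wndw with an unbounded frame.
employees = [
    ('Finance', 'Jonh', 200), ('Finance', 'Joan', 210), ('Finance', 'Jean', 505),
    ('IT', 'Tim', 200), ('IT', 'Anna', 300), ('IT', 'Elen', 500),
]

totals, maxima = {}, {}
for dep, _, salary in employees:
    totals[dep] = totals.get(dep, 0) + salary
    maxima[dep] = max(maxima.get(dep, 0), salary)

rows = [
    (dep, name, salary, maxima[dep], totals[dep],
     round(salary / totals[dep] * 100, 2))
    for dep, name, salary in employees
]
for row in rows:
    print(row)  # e.g. ('Finance', 'Jean', 505, 505, 915, 55.19)
```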
### Cumulative sum {#cumulative-sum}

```sql
CREATE TABLE warehouse
(
    item String,
    ts DateTime,
    value Float
)
ENGINE = Memory;

INSERT INTO warehouse VALUES
    ('sku38', '2020-01-01', 9),
    ('sku38', '2020-02-01', 1),
    ('sku38', '2020-03-01', -4),
    ('sku1', '2020-01-01', 1),
    ('sku1', '2020-02-01', 1),
    ('sku1', '2020-03-01', 1);
```

```sql
SELECT
    item,
    ts,
    value,
    sum(value) OVER (PARTITION BY item ORDER BY ts ASC) AS stock_balance
FROM warehouse
ORDER BY item ASC, ts ASC;
```

```
β”Œβ”€item──┬──────────────────ts─┬─value─┬─stock_balance─┐
β”‚ sku1  β”‚ 2020-01-01 00:00:00 β”‚     1 β”‚             1 β”‚
β”‚ sku1  β”‚ 2020-02-01 00:00:00 β”‚     1 β”‚             2 β”‚
β”‚ sku1  β”‚ 2020-03-01 00:00:00 β”‚     1 β”‚             3 β”‚
β”‚ sku38 β”‚ 2020-01-01 00:00:00 β”‚     9 β”‚             9 β”‚
β”‚ sku38 β”‚ 2020-02-01 00:00:00 β”‚     1 β”‚            10 β”‚
β”‚ sku38 β”‚ 2020-03-01 00:00:00 β”‚    -4 β”‚             6 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```

### Moving / sliding average (per 3 rows) {#moving--sliding-average-per-3-rows}

```sql
CREATE TABLE sensors
(
    metric String,
    ts DateTime,
    value Float
)
ENGINE = Memory;

INSERT INTO sensors VALUES
    ('cpu_temp', '2020-01-01 00:00:00', 87),
    ('cpu_temp', '2020-01-01 00:00:01', 77),
    ('cpu_temp', '2020-01-01 00:00:02', 93),
    ('cpu_temp', '2020-01-01 00:00:03', 87),
    ('cpu_temp', '2020-01-01 00:00:04', 87),
    ('cpu_temp', '2020-01-01 00:00:05', 87),
    ('cpu_temp', '2020-01-01 00:00:06', 87),
    ('cpu_temp', '2020-01-01 00:00:07', 87);
```

```sql
SELECT
    metric,
    ts,
    value,
    avg(value) OVER (
        PARTITION BY metric
        ORDER BY ts ASC
        ROWS BETWEEN 2 PRECEDING AND CURRENT ROW
    ) AS moving_avg_temp
FROM sensors
ORDER BY metric ASC, ts ASC;
```

```
β”Œβ”€metric───┬──────────────────ts─┬─value─┬───moving_avg_temp─┐
β”‚ cpu_temp β”‚ 2020-01-01 00:00:00 β”‚    87 β”‚                87 β”‚
β”‚ cpu_temp β”‚ 2020-01-01 00:00:01 β”‚    77 β”‚                82 β”‚
β”‚ cpu_temp β”‚ 2020-01-01 00:00:02 β”‚    93 β”‚ 85.66666666666667 β”‚
β”‚ cpu_temp β”‚ 2020-01-01 00:00:03 β”‚    87 β”‚ 85.66666666666667 β”‚
β”‚ cpu_temp β”‚ 2020-01-01 00:00:04 β”‚    87 β”‚                89 β”‚
β”‚ cpu_temp β”‚ 2020-01-01 00:00:05 β”‚    87 β”‚                87 β”‚
β”‚ cpu_temp β”‚ 2020-01-01 00:00:06 β”‚    87 β”‚                87 β”‚
β”‚ cpu_temp β”‚ 2020-01-01 00:00:07 β”‚    87 β”‚                87 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```

### Moving / sliding average (per 10 seconds) {#moving--sliding-average-per-10-seconds}

```sql
SELECT
    metric,
    ts,
    value,
    avg(value) OVER (
        PARTITION BY metric
        ORDER BY ts
        RANGE BETWEEN 10 PRECEDING AND CURRENT ROW
    ) AS moving_avg_10_seconds_temp
FROM sensors
ORDER BY metric ASC, ts ASC;
```
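The 3-row moving average above is just an average over a sliding slice; a minimal Python sketch of that `ROWS BETWEEN 2 PRECEDING AND CURRENT ROW` frame (an illustration, not ClickHouse code):

```python
# 3-row moving average: avg over ROWS BETWEEN 2 PRECEDING AND CURRENT ROW.
def moving_avg(values, preceding=2):
    out = []
    for i in range(len(values)):
        window = values[max(0, i - preceding):i + 1]  # at most preceding+1 rows
        out.append(sum(window) / len(window))
    return out

temps = [87, 77, 93, 87, 87, 87, 87, 87]  # cpu_temp ordered by ts
print(moving_avg(temps))
```

The first two rows average over shorter windows because the frame is clipped at the start of the partition, matching the result table above.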
```
β”Œβ”€metric───┬──────────────────ts─┬─value─┬─moving_avg_10_seconds_temp─┐
β”‚ cpu_temp β”‚ 2020-01-01 00:00:00 β”‚    87 β”‚                         87 β”‚
β”‚ cpu_temp β”‚ 2020-01-01 00:01:10 β”‚    77 β”‚                         77 β”‚
β”‚ cpu_temp β”‚ 2020-01-01 00:02:20 β”‚    93 β”‚                         93 β”‚
β”‚ cpu_temp β”‚ 2020-01-01 00:03:30 β”‚    87 β”‚                         87 β”‚
β”‚ cpu_temp β”‚ 2020-01-01 00:04:40 β”‚    87 β”‚                         87 β”‚
β”‚ cpu_temp β”‚ 2020-01-01 00:05:50 β”‚    87 β”‚                         87 β”‚
β”‚ cpu_temp β”‚ 2020-01-01 00:06:00 β”‚    87 β”‚                         87 β”‚
β”‚ cpu_temp β”‚ 2020-01-01 00:07:10 β”‚    87 β”‚                         87 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```

### Moving / sliding average (per 10 days) {#moving--sliding-average-per-10-days}

Temperature is stored with second precision, but by using `RANGE` with `ORDER BY toDate(ts)` we form a frame with a size of 10 units, and because of `toDate(ts)` that unit is a day.

```sql
CREATE TABLE sensors
(
    metric String,
    ts DateTime,
    value Float
)
ENGINE = Memory;

INSERT INTO sensors VALUES
    ('ambient_temp', '2020-01-01 00:00:00', 16),
    ('ambient_temp', '2020-01-01 12:00:00', 16),
    ('ambient_temp', '2020-01-02 11:00:00', 9),
    ('ambient_temp', '2020-01-02 12:00:00', 9),
    ('ambient_temp', '2020-02-01 10:00:00', 10),
    ('ambient_temp', '2020-02-01 12:00:00', 10),
    ('ambient_temp', '2020-02-10 12:00:00', 12),
    ('ambient_temp', '2020-02-10 13:00:00', 12),
    ('ambient_temp', '2020-02-20 12:00:01', 16),
    ('ambient_temp', '2020-03-01 12:00:00', 16),
    ('ambient_temp', '2020-03-01 12:00:00', 16),
    ('ambient_temp', '2020-03-01 12:00:00', 16);
```

```sql
SELECT
    metric,
    ts,
    value,
    round(
        avg(value) OVER (
            PARTITION BY metric
            ORDER BY toDate(ts)
            RANGE BETWEEN 10 PRECEDING AND CURRENT ROW
        ), 2
    ) AS moving_avg_10_days_temp
FROM sensors
ORDER BY metric ASC, ts ASC;
```
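Unlike `ROWS`, a `RANGE` frame selects rows by distance in the ORDER BY key, not by row count. A Python sketch of the 10-day frame, with dates reduced to day ordinals (an illustration under that simplification, not ClickHouse code):

```python
# RANGE BETWEEN 10 PRECEDING AND CURRENT ROW over an ORDER BY key:
# the frame holds every row whose key is within 10 units of the current key,
# including all peers (rows with the same key).
def range_avg(rows, span=10):
    # rows: (key, value) pairs sorted by key; key here is days since 2020-01-01
    out = []
    for key, _ in rows:
        window = [v for k, v in rows if key - span <= k <= key]
        out.append(round(sum(window) / len(window), 2))
    return out

# The ambient_temp sample: toDate(ts) collapses the time of day
samples = [(0, 16), (0, 16), (1, 9), (1, 9),
           (31, 10), (31, 10), (40, 12), (40, 12),
           (50, 16), (60, 16), (60, 16), (60, 16)]
print(range_avg(samples))
```

The output reproduces the `moving_avg_10_days_temp` column: e.g. the 2020-02-20 row averages the 2020-02-10 and 2020-02-20 readings (days 40 and 50) to 13.33.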
```
β”Œβ”€metric───────┬──────────────────ts─┬─value─┬─moving_avg_10_days_temp─┐
β”‚ ambient_temp β”‚ 2020-01-01 00:00:00 β”‚    16 β”‚                      16 β”‚
β”‚ ambient_temp β”‚ 2020-01-01 12:00:00 β”‚    16 β”‚                      16 β”‚
β”‚ ambient_temp β”‚ 2020-01-02 11:00:00 β”‚     9 β”‚                    12.5 β”‚
β”‚ ambient_temp β”‚ 2020-01-02 12:00:00 β”‚     9 β”‚                    12.5 β”‚
β”‚ ambient_temp β”‚ 2020-02-01 10:00:00 β”‚    10 β”‚                      10 β”‚
β”‚ ambient_temp β”‚ 2020-02-01 12:00:00 β”‚    10 β”‚                      10 β”‚
β”‚ ambient_temp β”‚ 2020-02-10 12:00:00 β”‚    12 β”‚                      11 β”‚
β”‚ ambient_temp β”‚ 2020-02-10 13:00:00 β”‚    12 β”‚                      11 β”‚
β”‚ ambient_temp β”‚ 2020-02-20 12:00:01 β”‚    16 β”‚                   13.33 β”‚
β”‚ ambient_temp β”‚ 2020-03-01 12:00:00 β”‚    16 β”‚                      16 β”‚
β”‚ ambient_temp β”‚ 2020-03-01 12:00:00 β”‚    16 β”‚                      16 β”‚
β”‚ ambient_temp β”‚ 2020-03-01 12:00:00 β”‚    16 β”‚                      16 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```

## References {#references}

### GitHub Issues {#github-issues}

The roadmap for the initial support of window functions is in this issue. All GitHub issues related to window functions have the comp-window-functions tag.

### Tests {#tests}

These tests contain the examples of the currently supported grammar:

- https://github.com/ClickHouse/ClickHouse/blob/master/tests/performance/window_functions.xml
- https://github.com/ClickHouse/ClickHouse/blob/master/tests/queries/0_stateless/01591_window_functions.sql

### Postgres Docs {#postgres-docs}

- https://www.postgresql.org/docs/current/sql-select.html#SQL-WINDOW
- https://www.postgresql.org/docs/devel/sql-expressions.html#SYNTAX-WINDOW-FUNCTIONS
- https://www.postgresql.org/docs/devel/functions-window.html
- https://www.postgresql.org/docs/devel/tutorial-window.html

### MySQL Docs {#mysql-docs}

- https://dev.mysql.com/doc/refman/8.0/en/window-function-descriptions.html
- https://dev.mysql.com/doc/refman/8.0/en/window-functions-usage.html
- https://dev.mysql.com/doc/refman/8.0/en/window-functions-frames.html

## Related Content {#related-content}

- Blog: Working with time series data in ClickHouse
- Blog: Window and array functions for Git commit sequences
- Blog: Getting Data Into ClickHouse - Part 3 - Using S3
---
description: 'Documentation for the dense_rank window function'
sidebar_label: 'dense_rank'
sidebar_position: 7
slug: /sql-reference/window-functions/dense_rank
title: 'dense_rank'
doc_type: 'reference'
---

# dense_rank

Ranks the current row within its partition without gaps. In other words, if the value of any new row encountered is equal to the value of one of the previous rows, then it receives the next successive rank without any gaps in ranking.

The rank function provides the same behaviour, but with gaps in ranking.

## Syntax

Alias: `denseRank` (case-sensitive)

```sql
dense_rank() OVER ([[PARTITION BY grouping_column] [ORDER BY sorting_column]
        [ROWS or RANGE expression_to_bound_rows_within_the_group]] | [window_name])
FROM table_name
WINDOW window_name as ([[PARTITION BY grouping_column] [ORDER BY sorting_column])
```

For more detail on window function syntax see: Window Functions - Syntax.

## Returned value

- A number for the current row within its partition, without gaps in ranking. UInt64.

## Example

The following example is based on the example provided in the instructional video Ranking window functions in ClickHouse.

Query:

```sql
CREATE TABLE salaries
(
    team String,
    player String,
    salary UInt32,
    position String
)
Engine = Memory;

INSERT INTO salaries FORMAT Values
    ('Port Elizabeth Barbarians', 'Gary Chen', 195000, 'F'),
    ('New Coreystad Archdukes', 'Charles Juarez', 190000, 'F'),
    ('Port Elizabeth Barbarians', 'Michael Stanley', 150000, 'D'),
    ('New Coreystad Archdukes', 'Scott Harrison', 150000, 'D'),
    ('Port Elizabeth Barbarians', 'Robert George', 195000, 'M'),
    ('South Hampton Seagulls', 'Douglas Benson', 150000, 'M'),
    ('South Hampton Seagulls', 'James Henderson', 140000, 'M');
```

```sql
SELECT player, salary,
       dense_rank() OVER (ORDER BY salary DESC) AS dense_rank
FROM salaries;
```

Result:

```response
   β”Œβ”€player──────────┬─salary─┬─dense_rank─┐
1. β”‚ Gary Chen       β”‚ 195000 β”‚          1 β”‚
2. β”‚ Robert George   β”‚ 195000 β”‚          1 β”‚
3. β”‚ Charles Juarez  β”‚ 190000 β”‚          2 β”‚
4. β”‚ Michael Stanley β”‚ 150000 β”‚          3 β”‚
5. β”‚ Douglas Benson  β”‚ 150000 β”‚          3 β”‚
6. β”‚ Scott Harrison  β”‚ 150000 β”‚          3 β”‚
7. β”‚ James Henderson β”‚ 140000 β”‚          4 β”‚
   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```
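The "no gaps" behaviour is easy to see in a Python sketch: equal values share a rank, and the next distinct value takes the next consecutive rank (an illustration, not ClickHouse code):

```python
# dense_rank over values already sorted by the ORDER BY expression.
def dense_rank(sorted_values):
    ranks, current = [], 0
    previous = object()  # sentinel that never equals a real value
    for v in sorted_values:
        if v != previous:
            current += 1     # advance by exactly 1, no matter how many peers
            previous = v
        ranks.append(current)
    return ranks

salaries = [195000, 195000, 190000, 150000, 150000, 150000, 140000]  # ORDER BY salary DESC
print(dense_rank(salaries))  # [1, 1, 2, 3, 3, 3, 4]
```

Compare with `rank()`, which would continue at 7 after the three-way tie on 150000 instead of 4.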
---
description: 'Documentation for the row_number window function'
sidebar_label: 'row_number'
sidebar_position: 2
slug: /sql-reference/window-functions/row_number
title: 'row_number'
doc_type: 'reference'
---

# row_number

Numbers the current row within its partition starting from 1.

## Syntax

```sql
row_number() OVER ([[PARTITION BY grouping_column] [ORDER BY sorting_column]
        [ROWS or RANGE expression_to_bound_rows_within_the_group]] | [window_name])
FROM table_name
WINDOW window_name as ([[PARTITION BY grouping_column] [ORDER BY sorting_column])
```

For more detail on window function syntax see: Window Functions - Syntax.

## Returned value

- A number for the current row within its partition. UInt64.

## Example

The following example is based on the example provided in the instructional video Ranking window functions in ClickHouse.

Query:

```sql
CREATE TABLE salaries
(
    team String,
    player String,
    salary UInt32,
    position String
)
Engine = Memory;

INSERT INTO salaries FORMAT Values
    ('Port Elizabeth Barbarians', 'Gary Chen', 195000, 'F'),
    ('New Coreystad Archdukes', 'Charles Juarez', 190000, 'F'),
    ('Port Elizabeth Barbarians', 'Michael Stanley', 150000, 'D'),
    ('New Coreystad Archdukes', 'Scott Harrison', 150000, 'D'),
    ('Port Elizabeth Barbarians', 'Robert George', 195000, 'M');
```

```sql
SELECT player, salary,
       row_number() OVER (ORDER BY salary DESC) AS row_number
FROM salaries;
```

Result:

```response
   β”Œβ”€player──────────┬─salary─┬─row_number─┐
1. β”‚ Gary Chen       β”‚ 195000 β”‚          1 β”‚
2. β”‚ Robert George   β”‚ 195000 β”‚          2 β”‚
3. β”‚ Charles Juarez  β”‚ 190000 β”‚          3 β”‚
4. β”‚ Scott Harrison  β”‚ 150000 β”‚          4 β”‚
5. β”‚ Michael Stanley β”‚ 150000 β”‚          5 β”‚
   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```
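In contrast to the ranking functions, `row_number` simply counts rows in frame order; a minimal Python sketch of that behaviour (an illustration, not ClickHouse code):

```python
# row_number assigns consecutive numbers in frame order; tied salaries still
# get distinct numbers (the order among equal values is not guaranteed).
salaries = [('Gary Chen', 195000), ('Robert George', 195000),
            ('Charles Juarez', 190000), ('Scott Harrison', 150000),
            ('Michael Stanley', 150000)]  # already sorted by salary DESC

numbered = [(i + 1, player, salary) for i, (player, salary) in enumerate(salaries)]
for row in numbered:
    print(row)  # (1, 'Gary Chen', 195000) ... (5, 'Michael Stanley', 150000)
```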
---
description: 'Documentation for the cume_dist window function'
sidebar_label: 'cume_dist'
sidebar_position: 11
slug: /sql-reference/window-functions/cume_dist
title: 'cume_dist'
doc_type: 'reference'
---

# cume_dist

Computes the cumulative distribution of a value within a group of values, i.e. the percentage of rows with values less than or equal to the current row's value. Can be used to determine the relative standing of a value within a partition.

## Syntax

```sql
cume_dist() OVER ([[PARTITION BY grouping_column] [ORDER BY sorting_column]
        [RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING]] | [window_name])
FROM table_name
WINDOW window_name as ([PARTITION BY grouping_column] [ORDER BY sorting_column] RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
```

The default and required window frame definition is `RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING`.

For more detail on window function syntax see: Window Functions - Syntax.

## Returned value

- The relative rank of the current row, in the range [0, 1]. Float64.

## Example

The following example calculates the cumulative distribution of salaries within a team:

Query:

```sql
CREATE TABLE salaries
(
    team String,
    player String,
    salary UInt32,
    position String
)
Engine = Memory;

INSERT INTO salaries FORMAT Values
    ('Port Elizabeth Barbarians', 'Gary Chen', 195000, 'F'),
    ('New Coreystad Archdukes', 'Charles Juarez', 190000, 'F'),
    ('Port Elizabeth Barbarians', 'Michael Stanley', 150000, 'D'),
    ('New Coreystad Archdukes', 'Scott Harrison', 150000, 'D'),
    ('Port Elizabeth Barbarians', 'Robert George', 195000, 'M'),
    ('South Hampton Seagulls', 'Douglas Benson', 150000, 'M'),
    ('South Hampton Seagulls', 'James Henderson', 140000, 'M');
```

```sql
SELECT player, salary,
       cume_dist() OVER (ORDER BY salary DESC) AS cume_dist
FROM salaries;
```

Result:

```response
   β”Œβ”€player──────────┬─salary─┬───────────cume_dist─┐
1. β”‚ Robert George   β”‚ 195000 β”‚  0.2857142857142857 β”‚
2. β”‚ Gary Chen       β”‚ 195000 β”‚  0.2857142857142857 β”‚
3. β”‚ Charles Juarez  β”‚ 190000 β”‚ 0.42857142857142855 β”‚
4. β”‚ Douglas Benson  β”‚ 150000 β”‚  0.8571428571428571 β”‚
5. β”‚ Michael Stanley β”‚ 150000 β”‚  0.8571428571428571 β”‚
6. β”‚ Scott Harrison  β”‚ 150000 β”‚  0.8571428571428571 β”‚
7. β”‚ James Henderson β”‚ 140000 β”‚                   1 β”‚
   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```

## Implementation Details

The `cume_dist()` function calculates the relative position using the following formula:

```text
cume_dist = (number of rows ≀ current row value) / (total number of rows in partition)
```

Rows with equal values (peers) receive the same cumulative distribution value, which corresponds to the highest position of the peer group.
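The formula above can be checked directly in Python. Note that "≀ current row value" is taken in the sense of the ORDER BY; with `ORDER BY salary DESC` it means "salary β‰₯ the current salary" (an illustration, not ClickHouse code):

```python
# cume_dist = (# rows ordered at or before the current row's peer group)
#             / (# rows in partition).
def cume_dist(sorted_desc):
    n = len(sorted_desc)
    # for DESC order, a row counts every row whose value is >= its own
    return [sum(1 for other in sorted_desc if other >= v) / n for v in sorted_desc]

salaries = [195000, 195000, 190000, 150000, 150000, 150000, 140000]
print(cume_dist(salaries))
```

This reproduces the result table: both 195000 rows get 2/7 β‰ˆ 0.2857, all three 150000 rows share 6/7 β‰ˆ 0.8571, and the minimum gets 1.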
---
description: 'Documentation for the nth_value window function'
sidebar_label: 'nth_value'
sidebar_position: 5
slug: /sql-reference/window-functions/nth_value
title: 'nth_value'
doc_type: 'reference'
---

# nth_value

Returns the first non-NULL value evaluated against the nth row (offset) in its ordered frame.

## Syntax

```sql
nth_value(x, offset) OVER ([[PARTITION BY grouping_column] [ORDER BY sorting_column]
        [ROWS or RANGE expression_to_bound_rows_within_the_group]] | [window_name])
FROM table_name
WINDOW window_name as ([[PARTITION BY grouping_column] [ORDER BY sorting_column])
```

For more detail on window function syntax see: Window Functions - Syntax.

## Parameters

- `x` β€” Column name.
- `offset` β€” nth row to evaluate the current row against.

## Returned value

- The first non-NULL value evaluated against the nth row (offset) in its ordered frame.

## Example

In this example the `nth_value` function is used to find the third-highest salary from a fictional dataset of salaries of Premier League football players.

Query:

```sql
DROP TABLE IF EXISTS salaries;
CREATE TABLE salaries
(
    team String,
    player String,
    salary UInt32,
    position String
)
Engine = Memory;

INSERT INTO salaries FORMAT Values
    ('Port Elizabeth Barbarians', 'Gary Chen', 195000, 'F'),
    ('New Coreystad Archdukes', 'Charles Juarez', 190000, 'F'),
    ('Port Elizabeth Barbarians', 'Michael Stanley', 100000, 'D'),
    ('New Coreystad Archdukes', 'Scott Harrison', 180000, 'D'),
    ('Port Elizabeth Barbarians', 'Robert George', 195000, 'M'),
    ('South Hampton Seagulls', 'Douglas Benson', 150000, 'M'),
    ('South Hampton Seagulls', 'James Henderson', 140000, 'M');
```

```sql
SELECT player, salary,
       nth_value(player, 3) OVER (ORDER BY salary DESC) AS third_highest_salary
FROM salaries;
```

Result:

```response
   β”Œβ”€player──────────┬─salary─┬─third_highest_salary─┐
1. β”‚ Gary Chen       β”‚ 195000 β”‚                      β”‚
2. β”‚ Robert George   β”‚ 195000 β”‚                      β”‚
3. β”‚ Charles Juarez  β”‚ 190000 β”‚ Charles Juarez       β”‚
4. β”‚ Scott Harrison  β”‚ 180000 β”‚ Charles Juarez       β”‚
5. β”‚ Douglas Benson  β”‚ 150000 β”‚ Charles Juarez       β”‚
6. β”‚ James Henderson β”‚ 140000 β”‚ Charles Juarez       β”‚
7. β”‚ Michael Stanley β”‚ 100000 β”‚ Charles Juarez       β”‚
   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```
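Why the first two rows come back empty is clear from a Python sketch of the growing default frame (`None` stands in for the missing value here; an illustration, not ClickHouse code):

```python
# nth_value(x, offset): the value at the offset-th row of the ordered frame,
# or a missing value while the frame is still shorter than offset.
def nth_value(frame, offset):
    return frame[offset - 1] if len(frame) >= offset else None

# Default frame with ORDER BY: UNBOUNDED PRECEDING .. CURRENT ROW, so the
# frame grows row by row over the players sorted by salary DESC.
players = ['Gary Chen', 'Robert George', 'Charles Juarez', 'Scott Harrison',
           'Douglas Benson', 'James Henderson', 'Michael Stanley']
third = [nth_value(players[:i + 1], 3) for i in range(len(players))]
print(third)
```

For the first two rows the frame holds fewer than three rows, so there is no third value yet; from the third row on, the third row of the frame is always 'Charles Juarez'.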
---
description: 'Enables simultaneous processing of files matching a specified path across multiple nodes within a cluster. The initiator establishes connections to worker nodes, expands globs in the file path, and delegates file-reading tasks to worker nodes. Each worker node queries the initiator for the next file to process, repeating until all tasks are completed (all files are read).'
sidebar_label: 'fileCluster'
sidebar_position: 61
slug: /sql-reference/table-functions/fileCluster
title: 'fileCluster'
doc_type: 'reference'
---

# fileCluster Table Function

Enables simultaneous processing of files matching a specified path across multiple nodes within a cluster. The initiator establishes connections to worker nodes, expands globs in the file path, and delegates file-reading tasks to worker nodes. Each worker node queries the initiator for the next file to process, repeating until all tasks are completed (all files are read).

:::note
This function will operate correctly only if the set of files matching the initially specified path is identical across all nodes and their content is consistent among different nodes. If these files differ between nodes, the return value cannot be predetermined and depends on the order in which worker nodes request tasks from the initiator.
:::

## Syntax {#syntax}

```sql
fileCluster(cluster_name, path[, format, structure, compression_method])
```

## Arguments {#arguments}

| Argument             | Description                                                                                                       |
|----------------------|-------------------------------------------------------------------------------------------------------------------|
| `cluster_name`       | Name of a cluster that is used to build a set of addresses and connection parameters to remote and local servers.  |
| `path`               | The relative path to the file from user_files_path. Path to file also supports globs.                              |
| `format`             | Format of the files. Type: String.                                                                                 |
| `structure`          | Table structure in `'UserID UInt64, Name String'` format. Determines column names and types. Type: String.         |
| `compression_method` | Compression method. Supported compression types are `gz`, `br`, `xz`, `zst`, `lz4`, and `bz2`.                     |

## Returned value {#returned_value}

A table with the specified format and structure and with data from files matching the specified path.
## Example {#example}

Given a cluster named `my_cluster` and given the following value of setting `user_files_path`:

```bash
$ grep user_files_path /etc/clickhouse-server/config.xml
    <user_files_path>/var/lib/clickhouse/user_files/</user_files_path>
```

Also, given there are files `test1.csv` and `test2.csv` inside `user_files_path` of each cluster node, and their content is identical across different nodes:

```bash
$ cat /var/lib/clickhouse/user_files/test1.csv
1,"file1"
11,"file11"

$ cat /var/lib/clickhouse/user_files/test2.csv
2,"file2"
22,"file22"
```

For example, one can create these files by executing these two queries on every cluster node:

```sql
INSERT INTO TABLE FUNCTION file('file1.csv', 'CSV', 'i UInt32, s String') VALUES (1,'file1'), (11,'file11');
INSERT INTO TABLE FUNCTION file('file2.csv', 'CSV', 'i UInt32, s String') VALUES (2,'file2'), (22,'file22');
```

Now, read the data contents of `test1.csv` and `test2.csv` via the `fileCluster` table function:

```sql
SELECT * FROM fileCluster('my_cluster', 'file{1,2}.csv', 'CSV', 'i UInt32, s String')
ORDER BY i, s
```

```response
β”Œβ”€β”€i─┬─s──────┐
β”‚  1 β”‚ file1  β”‚
β”‚ 11 β”‚ file11 β”‚
β””β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜
β”Œβ”€β”€i─┬─s──────┐
β”‚  2 β”‚ file2  β”‚
β”‚ 22 β”‚ file22 β”‚
β””β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```

## Globs in Path {#globs-in-path}

All patterns supported by the File table function are supported by fileCluster.

## Related {#related}

- File table function
{"source_file": "fileCluster.md"}
29bc9f4b-792a-4bb7-b6de-22a24e8741f1
description: 'Table function that allows effectively converting and inserting data sent to the server with a given structure to a table with another structure.' sidebar_label: 'input' sidebar_position: 95 slug: /sql-reference/table-functions/input title: 'input' doc_type: 'reference'

input Table Function

input(structure) - table function that allows effectively converting and inserting data sent to the server with a given structure into a table with another structure.

structure - structure of the data sent to the server, in the following format: 'column1_name column1_type, column2_name column2_type, ...' . For example, 'id UInt32, name String' .

This function can be used only in an INSERT SELECT query and only once, but otherwise it behaves like an ordinary table function (for example, it can be used in a subquery, etc.). Data can be sent in any way, as for an ordinary INSERT query, and passed in any available format that must be specified at the end of the query (unlike an ordinary INSERT SELECT ). The main feature of this function is that when the server receives data from the client, it simultaneously converts it according to the list of expressions in the SELECT clause and inserts it into the target table. A temporary table with all of the transferred data is not created.

Examples {#examples}

Suppose the test table has the structure (a String, b String) and the data in data.csv has a different structure (col1 String, col2 Date, col3 Int32) . The query to insert data from data.csv into the test table with simultaneous conversion looks like this:

```bash
$ cat data.csv | clickhouse-client --query="INSERT INTO test SELECT lower(col1), col3 * col3 FROM input('col1 String, col2 Date, col3 Int32') FORMAT CSV";
```

If data.csv contains data of the same structure test_structure as the table test , then these two queries are equivalent:

```bash
$ cat data.csv | clickhouse-client --query="INSERT INTO test FORMAT CSV"
$ cat data.csv | clickhouse-client --query="INSERT INTO test SELECT * FROM input('test_structure') FORMAT CSV"
```
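The conversion that input() performs can be pictured outside ClickHouse as a plain per-row transform. A minimal Python sketch of the SELECT lower(col1), col3 * col3 pipeline above (the sample rows and the convert helper are illustrative, not part of ClickHouse):

```python
import csv
import io

# Hypothetical sample data standing in for data.csv (col1 String, col2 Date, col3 Int32).
raw = "ABC,2024-01-01,3\nDeF,2024-01-02,5\n"

def convert(row):
    # Mirrors the SELECT clause: lower(col1), col3 * col3.
    col1, _col2, col3 = row[0], row[1], int(row[2])
    return (col1.lower(), col3 * col3)

# Each incoming row is converted on the fly and would be inserted into test(a, b).
converted = [convert(r) for r in csv.reader(io.StringIO(raw))]
print(converted)  # [('abc', 9), ('def', 25)]
```

The point of the sketch: no intermediate table holds the raw (col1, col2, col3) rows; conversion happens as the stream is read.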
{"source_file": "input.md"}
2a5441ee-0131-48fa-96d0-3c80bed0ab78
description: 'Provides a read-only table-like interface to Apache Iceberg tables in Amazon S3, Azure, HDFS or locally stored.' sidebar_label: 'iceberg' sidebar_position: 90 slug: /sql-reference/table-functions/iceberg title: 'iceberg' doc_type: 'reference'

iceberg Table Function {#iceberg-table-function}

Provides a read-only table-like interface to Apache Iceberg tables in Amazon S3, Azure, HDFS or locally stored.

Syntax {#syntax}

```sql
icebergS3(url [, NOSIGN | access_key_id, secret_access_key, [session_token]] [,format] [,compression_method])
icebergS3(named_collection[, option=value [,..]])

icebergAzure(connection_string|storage_account_url, container_name, blobpath, [,account_name], [,account_key] [,format] [,compression_method])
icebergAzure(named_collection[, option=value [,..]])

icebergHDFS(path_to_table, [,format] [,compression_method])
icebergHDFS(named_collection[, option=value [,..]])

icebergLocal(path_to_table, [,format] [,compression_method])
icebergLocal(named_collection[, option=value [,..]])
```

Arguments {#arguments}

The description of the arguments coincides with the description of the arguments of the table functions s3 , azureBlobStorage , HDFS and file respectively. format stands for the format of data files in the Iceberg table.

Returned value {#returned-value}

A table with the specified structure for reading data in the specified Iceberg table.

Example {#example}

```sql
SELECT * FROM icebergS3('http://test.s3.amazonaws.com/clickhouse-bucket/test_table', 'test', 'test')
```

:::important ClickHouse currently supports reading v1 and v2 of the Iceberg format via the icebergS3 , icebergAzure , icebergHDFS and icebergLocal table functions and the IcebergS3 , IcebergAzure , IcebergHDFS and IcebergLocal table engines. 
::: Defining a named collection {#defining-a-named-collection}

Here is an example of configuring a named collection for storing the URL and credentials:

```xml
<clickhouse>
    <named_collections>
        <iceberg_conf>
            <url>http://test.s3.amazonaws.com/clickhouse-bucket/</url>
            <access_key_id>test</access_key_id>
            <secret_access_key>test</secret_access_key>
            <format>auto</format>
            <structure>auto</structure>
        </iceberg_conf>
    </named_collections>
</clickhouse>
```

```sql
SELECT * FROM icebergS3(iceberg_conf, filename = 'test_table')
DESCRIBE icebergS3(iceberg_conf, filename = 'test_table')
```

Schema Evolution {#schema-evolution}

At the moment, ClickHouse can read Iceberg tables whose schema has changed over time. We currently support reading tables where columns have been added and removed, and their order has changed. You can also change a column where a value is required to one where NULL is allowed. Additionally, we support the permitted type casting for simple types, namely:

int -> long
float -> double
decimal(P, S) -> decimal(P', S) where P' > P.
{"source_file": "iceberg.md"}
4c17422a-399d-4b3c-adfa-2a36fc02d248
int -> long
float -> double
decimal(P, S) -> decimal(P', S) where P' > P.

Currently, it is not possible to change nested structures or the types of elements within arrays and maps.

Partition Pruning {#partition-pruning}

ClickHouse supports partition pruning during SELECT queries for Iceberg tables, which helps optimize query performance by skipping irrelevant data files. To enable partition pruning, set use_iceberg_partition_pruning = 1 . For more information about Iceberg partition pruning, see https://iceberg.apache.org/spec/#partitioning

Time Travel {#time-travel}

ClickHouse supports time travel for Iceberg tables, allowing you to query historical data with a specific timestamp or snapshot ID.

Processing of tables with deleted rows {#deleted-rows}

Currently, only Iceberg tables with position deletes are supported. The following deletion methods are not supported:

- Equality deletes
- Deletion vectors (introduced in v3)

Basic usage {#basic-usage}

```sql
SELECT * FROM example_table ORDER BY 1
SETTINGS iceberg_timestamp_ms = 1714636800000
```

```sql
SELECT * FROM example_table ORDER BY 1
SETTINGS iceberg_snapshot_id = 3547395809148285433
```

Note: You cannot specify both the iceberg_timestamp_ms and iceberg_snapshot_id parameters in the same query.

Important considerations {#important-considerations}

Snapshots are typically created when:

- New data is written to the table
- Some kind of data compaction is performed

Schema changes typically don't create snapshots. This leads to important behaviors when using time travel with tables that have undergone schema evolution.

Example scenarios {#example-scenarios}

All scenarios are written in Spark because ClickHouse doesn't support writing to Iceberg tables yet. 
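The permitted simple-type promotions listed above can be checked mechanically. A small Python sketch of such a check (the function name and the type-string format are my own, for illustration):

```python
def is_allowed_promotion(src: str, dst: str) -> bool:
    """Check Iceberg's permitted simple-type promotions:
    int -> long, float -> double, decimal(P, S) -> decimal(P', S) with P' > P."""
    if (src, dst) in {("int", "long"), ("float", "double")}:
        return True
    if src.startswith("decimal(") and dst.startswith("decimal("):
        p1, s1 = (int(x) for x in src[8:-1].split(","))
        p2, s2 = (int(x) for x in dst[8:-1].split(","))
        # Precision may only widen; scale must stay the same.
        return s1 == s2 and p2 > p1
    return src == dst  # identical types are trivially fine

print(is_allowed_promotion("decimal(10,2)", "decimal(12,2)"))  # True
print(is_allowed_promotion("long", "int"))                     # False: narrowing
```

Anything outside these cases (including changes inside arrays, maps, and nested structures) is rejected, matching the limitation stated above.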
Scenario 1: Schema Changes Without New Snapshots {#scenario-1} Consider this sequence of operations: ```sql -- Create a table with two columns CREATE TABLE IF NOT EXISTS spark_catalog.db.time_travel_example ( order_number bigint, product_code string ) USING iceberg OPTIONS ('format-version'='2') Insert data into the table INSERT INTO spark_catalog.db.time_travel_example VALUES (1, 'Mars') ts1 = now() // A piece of pseudo code Alter table to add a new column ALTER TABLE spark_catalog.db.time_travel_example ADD COLUMN (price double) ts2 = now() Insert data into the table INSERT INTO spark_catalog.db.time_travel_example VALUES (2, 'Venus', 100) ts3 = now() Query the table at each timestamp SELECT * FROM spark_catalog.db.time_travel_example TIMESTAMP AS OF ts1; +------------+------------+ |order_number|product_code| +------------+------------+ | 1| Mars| +------------+------------+ SELECT * FROM spark_catalog.db.time_travel_example TIMESTAMP AS OF ts2;
{"source_file": "iceberg.md"}
65055a82-bb09-4500-a207-47d6797553db
+------------+------------+
|order_number|product_code|
+------------+------------+
|           1|        Mars|
+------------+------------+

SELECT * FROM spark_catalog.db.time_travel_example TIMESTAMP AS OF ts3;

+------------+------------+-----+
|order_number|product_code|price|
+------------+------------+-----+
|           1|        Mars| NULL|
|           2|       Venus|100.0|
+------------+------------+-----+
```

Query results at different timestamps:

- At ts1 & ts2: Only the original two columns appear
- At ts3: All three columns appear, with NULL for the price of the first row

Scenario 2: Historical vs. Current Schema Differences {#scenario-2}

A time travel query at the current moment might show a different schema than the current table:

```sql
-- Create a table
CREATE TABLE IF NOT EXISTS spark_catalog.db.time_travel_example_2 (
    order_number bigint,
    product_code string
)
USING iceberg
OPTIONS ('format-version'='2')

-- Insert initial data into the table
INSERT INTO spark_catalog.db.time_travel_example_2 VALUES (2, 'Venus');

-- Alter table to add a new column
ALTER TABLE spark_catalog.db.time_travel_example_2 ADD COLUMN (price double);

ts = now();

-- Query the table at a current moment but using timestamp syntax
SELECT * FROM spark_catalog.db.time_travel_example_2 TIMESTAMP AS OF ts;

+------------+------------+
|order_number|product_code|
+------------+------------+
|           2|       Venus|
+------------+------------+

-- Query the table at a current moment
SELECT * FROM spark_catalog.db.time_travel_example_2;

+------------+------------+-----+
|order_number|product_code|price|
+------------+------------+-----+
|           2|       Venus| NULL|
+------------+------------+-----+
```

This happens because ALTER TABLE doesn't create a new snapshot; for the current table, Spark takes the value of schema_id from the latest metadata file, not from a snapshot.

Scenario 3: Historical vs. 
Current Schema Differences {#scenario-3}

Another limitation: while doing time travel, you cannot get the state of a table from before any data was written to it:

```sql
-- Create a table
CREATE TABLE IF NOT EXISTS spark_catalog.db.time_travel_example_3 (
    order_number bigint,
    product_code string
)
USING iceberg
OPTIONS ('format-version'='2');

ts = now();

-- Query the table at a specific timestamp
SELECT * FROM spark_catalog.db.time_travel_example_3 TIMESTAMP AS OF ts;
-- Finishes with error: Cannot find a snapshot older than ts.
```

In ClickHouse, the behavior is consistent with Spark. You can mentally replace Spark SELECT queries with ClickHouse SELECT queries and it will work the same way.

Metadata File Resolution {#metadata-file-resolution}

When using the iceberg table function in ClickHouse, the system needs to locate the correct metadata.json file that describes the Iceberg table structure. Here's how this resolution process works:
{"source_file": "iceberg.md"}
7f7df97c-3ee9-49b4-8ee5-14578e167870
Candidate Search (in Priority Order) {#candidate-search}

Direct Path Specification: If you set iceberg_metadata_file_path , the system will use this exact path by combining it with the Iceberg table directory path. When this setting is provided, all other resolution settings are ignored.

Table UUID Matching: If iceberg_metadata_table_uuid is specified, the system will:

- Look only at .metadata.json files in the metadata directory
- Filter for files containing a table-uuid field matching your specified UUID (case-insensitive)

Default Search: If neither of the above settings is provided, all .metadata.json files in the metadata directory become candidates.

Selecting the Most Recent File {#most-recent-file}

After identifying candidate files using the above rules, the system determines which one is the most recent:

- If iceberg_recent_metadata_file_by_last_updated_ms_field is enabled: the file with the largest last-updated-ms value is selected
- Otherwise: the file with the highest version number is selected (the version appears as V in filenames formatted as V.metadata.json or V-uuid.metadata.json )

Note: All mentioned settings are table function settings (not global or query-level settings) and must be specified as shown below:

```sql
SELECT * FROM iceberg('s3://bucket/path/to/iceberg_table',
    SETTINGS iceberg_metadata_table_uuid = 'a90eed4c-f74b-4e5b-b630-096fb9d09021');
```

Note: While Iceberg Catalogs typically handle metadata resolution, the iceberg table function in ClickHouse directly interprets files stored in S3 as Iceberg tables, which is why understanding these resolution rules is important.

Metadata cache {#metadata-cache}

The Iceberg table engine and table function support a metadata cache storing the information of manifest files, the manifest list and metadata json. The cache is stored in memory. This feature is controlled by the setting use_iceberg_metadata_files_cache , which is enabled by default.

Aliases {#aliases}

The table function iceberg is an alias to icebergS3 now. 
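The "highest version number" fallback can be sketched in a few lines of Python (the helper name and the filename parsing are assumptions based on the V.metadata.json / V-uuid.metadata.json formats described above):

```python
import re

def pick_latest_metadata(candidates):
    """Sketch of the default rule: the candidate with the highest version V,
    parsed from 'V.metadata.json' or 'V-uuid.metadata.json', wins."""
    def version(name):
        m = re.match(r"(\d+)(?:-[0-9a-f-]+)?\.metadata\.json$", name)
        return int(m.group(1)) if m else -1
    return max(candidates, key=version)

files = ["1.metadata.json", "3-abc.metadata.json", "2.metadata.json"]
print(pick_latest_metadata(files))  # 3-abc.metadata.json
```

With iceberg_recent_metadata_file_by_last_updated_ms_field enabled, the key function would instead read last-updated-ms from each file's JSON body.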
Virtual Columns {#virtual-columns} _path β€” Path to the file. Type: LowCardinality(String) . _file β€” Name of the file. Type: LowCardinality(String) . _size β€” Size of the file in bytes. Type: Nullable(UInt64) . If the file size is unknown, the value is NULL . _time β€” Last modified time of the file. Type: Nullable(DateTime) . If the time is unknown, the value is NULL . _etag β€” The etag of the file. Type: LowCardinality(String) . If the etag is unknown, the value is NULL . Writes into iceberg table {#writes-into-iceberg-table} Starting from version 25.7, ClickHouse supports modifications of user’s Iceberg tables. Currently, this is an experimental feature, so you first need to enable it: sql SET allow_experimental_insert_into_iceberg = 1; Creating table {#create-iceberg-table}
{"source_file": "iceberg.md"}
9fb2163c-69ea-44d1-81a5-cf962488c79e
Currently, this is an experimental feature, so you first need to enable it:

```sql
SET allow_experimental_insert_into_iceberg = 1;
```

Creating table {#create-iceberg-table}

To create your own empty Iceberg table, use the same commands as for reading, but specify the schema explicitly. Writes support all data formats from the Iceberg specification, such as Parquet, Avro and ORC.

Example {#example-iceberg-writes-create}

```sql
CREATE TABLE iceberg_writes_example
(
    x Nullable(String),
    y Nullable(Int32)
)
ENGINE = IcebergLocal('/home/scanhex12/iceberg_example/')
```

Note: To create a version hint file, enable the iceberg_use_version_hint setting. If you want to compress the metadata.json file, specify the codec name in the iceberg_metadata_compression_method setting.

INSERT {#writes-inserts}

After creating a new table, you can insert data using the usual ClickHouse syntax.

Example {#example-iceberg-writes-insert}

```sql
INSERT INTO iceberg_writes_example VALUES ('Pavel', 777), ('Ivanov', 993);

SELECT * FROM iceberg_writes_example FORMAT VERTICAL;

Row 1:
──────
x: Pavel
y: 777

Row 2:
──────
x: Ivanov
y: 993
```

DELETE {#iceberg-writes-delete}

Deleting extra rows in the merge-on-read format is also supported in ClickHouse. This query will create a new snapshot with position delete files.

NOTE: If you want to read your tables in the future with other Iceberg engines (such as Spark), you need to disable the settings output_format_parquet_use_custom_encoder and output_format_parquet_parallel_encoding . This is because Spark reads these files by parquet field-ids, while ClickHouse does not currently support writing field-ids when these flags are enabled. We plan to fix this behavior in the future. 
Example {#example-iceberg-writes-delete} ```sql ALTER TABLE iceberg_writes_example DELETE WHERE x != 'Ivanov'; SELECT * FROM iceberg_writes_example FORMAT VERTICAL; Row 1: ────── x: Ivanov y: 993 ``` Schema evolution {#iceberg-writes-schema-evolution} ClickHouse allows you to add, drop, or modify columns with simple types (non-tuple, non-array, non-map). Example {#example-iceberg-writes-evolution} ```sql ALTER TABLE iceberg_writes_example MODIFY COLUMN y Nullable(Int64); SHOW CREATE TABLE iceberg_writes_example; β”Œβ”€statement─────────────────────────────────────────────────┐ 1. β”‚ CREATE TABLE default.iceberg_writes_example ↴│ │↳( ↴│ │↳ x Nullable(String), ↴│ │↳ y Nullable(Int64) ↴│ │↳) ↴│ │↳ENGINE = IcebergLocal('/home/scanhex12/iceberg_example/') β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ ALTER TABLE iceberg_writes_example ADD COLUMN z Nullable(Int32); SHOW CREATE TABLE iceberg_writes_example;
{"source_file": "iceberg.md"}
a4009697-2673-41fa-abbb-895d625f8fee
ALTER TABLE iceberg_writes_example ADD COLUMN z Nullable(Int32);
SHOW CREATE TABLE iceberg_writes_example;

β”Œβ”€statement─────────────────────────────────────────────────┐
1. β”‚ CREATE TABLE default.iceberg_writes_example ↴│
   │↳( ↴│
   │↳ x Nullable(String), ↴│
   │↳ y Nullable(Int64), ↴│
   │↳ z Nullable(Int32) ↴│
   │↳) ↴│
   │↳ENGINE = IcebergLocal('/home/scanhex12/iceberg_example/') β”‚
   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

SELECT * FROM iceberg_writes_example FORMAT VERTICAL;

Row 1:
──────
x: Ivanov
y: 993
z: ᴺᡁᴸᴸ

ALTER TABLE iceberg_writes_example DROP COLUMN z;
SHOW CREATE TABLE iceberg_writes_example;

β”Œβ”€statement─────────────────────────────────────────────────┐
1. β”‚ CREATE TABLE default.iceberg_writes_example ↴│
   │↳( ↴│
   │↳ x Nullable(String), ↴│
   │↳ y Nullable(Int64) ↴│
   │↳) ↴│
   │↳ENGINE = IcebergLocal('/home/scanhex12/iceberg_example/') β”‚
   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

SELECT * FROM iceberg_writes_example FORMAT VERTICAL;

Row 1:
──────
x: Ivanov
y: 993
```

Compaction {#iceberg-writes-compaction}

ClickHouse supports compaction of Iceberg tables. Currently, it can merge position delete files into data files while updating metadata. Previous snapshot IDs and timestamps remain unchanged, so the time-travel feature can still be used with the same values. How to use it:

```sql
SET allow_experimental_iceberg_compaction = 1

OPTIMIZE TABLE iceberg_writes_example;

SELECT * FROM iceberg_writes_example FORMAT VERTICAL;

Row 1:
──────
x: Ivanov
y: 993
```

Table with catalogs {#iceberg-writes-catalogs}

All the write features described above are also available with REST and Glue catalogs. 
To use them, create a table with the IcebergS3 engine and provide the necessary settings:

```sql
CREATE TABLE `database_name.table_name` ENGINE = IcebergS3('http://minio:9000/warehouse-rest/table_name/', 'minio_access_key', 'minio_secret_key')
SETTINGS storage_catalog_type="rest", storage_warehouse="demo", object_storage_endpoint="http://minio:9000/warehouse-rest", storage_region="us-east-1", storage_catalog_url="http://rest:8181/v1"
```

See Also {#see-also}

- Iceberg engine
- Iceberg cluster table function
{"source_file": "iceberg.md"}
210e9c99-eb10-4b7b-a6a3-18254b11106b
description: 'The executable table function creates a table based on the output of a user-defined function (UDF) that you define in a script that outputs rows to stdout .' keywords: ['udf', 'user defined function', 'clickhouse', 'executable', 'table', 'function'] sidebar_label: 'executable' sidebar_position: 50 slug: /engines/table-functions/executable title: 'executable' doc_type: 'reference'

executable Table Function for UDFs

The executable table function creates a table based on the output of a user-defined function (UDF) that you define in a script that outputs rows to stdout . The executable script is stored in the user_scripts directory and can read data from any source. Make sure your ClickHouse server has all the required packages to run the executable script. For example, if it is a Python script, ensure that the server has the necessary Python packages installed.

You can optionally include one or more input queries that stream their results to stdin for the script to read.

:::note A key advantage of the executable table function and the Executable table engine over ordinary UDF functions is that ordinary UDF functions cannot change the row count. For example, if the input is 100 rows, then the result must return 100 rows. When using the executable table function or the Executable table engine, your script can make any data transformations you want, including complex aggregations. :::

Syntax {#syntax}

The executable table function requires three parameters and accepts an optional list of input queries:

```sql
executable(script_name, format, structure, [input_query...] [,SETTINGS ...])
```

- script_name : the file name of the script, 
saved in the user_scripts folder (the default folder of the user_scripts_path setting)
- format : the format of the generated table
- structure : the table schema of the generated table
- input_query : an optional query (or collection of queries) whose results are passed to the script via stdin

:::note If you are going to invoke the same script repeatedly with the same input queries, consider using the Executable table engine . :::

The following Python script is named generate_random.py and is saved in the user_scripts folder. It reads in a number i and prints i random strings, with each string preceded by a number that is separated by a tab:

```python
#!/usr/local/bin/python3.9
import sys
import string
import random

def main():
    # Read input value
    for number in sys.stdin:
        i = int(number)

        # Generate some random rows
        for id in range(0, i):
            letters = string.ascii_letters
            random_string = ''.join(random.choices(letters, k=10))
            print(str(id) + '\t' + random_string + '\n', end='')

        # Flush results to stdout
        sys.stdout.flush()

if __name__ == "__main__":
    main()
```

Let's invoke the script and have it generate 10 random strings:
{"source_file": "executable.md"}
b5a71bae-8ca4-4b9b-a7ab-7bcecd26fcba
```python
        # Flush results to stdout
        sys.stdout.flush()

if __name__ == "__main__":
    main()
```

Let's invoke the script and have it generate 10 random strings:

```sql
SELECT * FROM executable('generate_random.py', TabSeparated, 'id UInt32, random String', (SELECT 10))
```

The response looks like:

```response
β”Œβ”€id─┬─random─────┐
β”‚  0 β”‚ xheXXCiSkH β”‚
β”‚  1 β”‚ AqxvHAoTrl β”‚
β”‚  2 β”‚ JYvPCEbIkY β”‚
β”‚  3 β”‚ sWgnqJwGRm β”‚
β”‚  4 β”‚ fTZGrjcLon β”‚
β”‚  5 β”‚ ZQINGktPnd β”‚
β”‚  6 β”‚ YFSvGGoezb β”‚
β”‚  7 β”‚ QyMJJZOOia β”‚
β”‚  8 β”‚ NfiyDDhmcI β”‚
β”‚  9 β”‚ REJRdJpWrg β”‚
β””β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```

Settings {#settings}

- send_chunk_header - controls whether to send a row count before sending a chunk of data to process. Default value is false .
- pool_size β€” Size of pool. If 0 is specified as pool_size then there are no pool size restrictions. Default value is 16 .
- max_command_execution_time β€” Maximum executable script command execution time for processing a block of data. Specified in seconds. Default value is 10.
- command_termination_timeout β€” The executable script should contain a main read-write loop. After the table function is destroyed, the pipe is closed, and the executable file will have command_termination_timeout seconds to shut down before ClickHouse sends a SIGTERM signal to the child process. Specified in seconds. Default value is 10.
- command_read_timeout - timeout for reading data from command stdout in milliseconds. Default value 10000.
- command_write_timeout - timeout for writing data to command stdin in milliseconds. Default value 10000.

Passing Query Results to a Script {#passing-query-results-to-a-script}

Be sure to check out the example in the Executable table engine on how to pass query results to a script. 
Here is how you execute the same script in that example using the executable table function: sql SELECT * FROM executable( 'sentiment.py', TabSeparated, 'id UInt64, sentiment Float32', (SELECT id, comment FROM hackernews WHERE id > 0 AND comment != '' LIMIT 20) );
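A script like sentiment.py follows the same stdin/stdout contract as generate_random.py: tab-separated rows in, tab-separated rows out, flushed so ClickHouse sees results promptly. A toy Python sketch of such a script's read-write loop (the scoring logic is a made-up stand-in, not the real example's model):

```python
import io

def score(comment: str) -> float:
    # Toy stand-in for a real sentiment model: +1 per 'good', -1 per 'bad'.
    words = comment.lower().split()
    return float(words.count("good") - words.count("bad"))

def run(stdin, stdout):
    # One 'id<TAB>comment' row in, one 'id<TAB>score' row out.
    for line in stdin:
        row_id, comment = line.rstrip("\n").split("\t", 1)
        stdout.write(f"{row_id}\t{score(comment)}\n")
    stdout.flush()

# Simulate the pipe with in-memory streams instead of sys.stdin/sys.stdout.
out = io.StringIO()
run(io.StringIO("1\tgood stuff\n2\tbad bad day\n"), out)
print(out.getvalue(), end="")
```

In the real deployment the loop reads sys.stdin and writes sys.stdout, and the structure argument ('id UInt64, sentiment Float32') tells ClickHouse how to parse each output row.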
{"source_file": "executable.md"}
4beeba93-1be6-405e-b0b2-14e8319a4e43
description: 'timeSeriesMetrics returns the metrics table used by table db_name.time_series_table whose table engine is the TimeSeries engine.' sidebar_label: 'timeSeriesMetrics' sidebar_position: 145 slug: /sql-reference/table-functions/timeSeriesMetrics title: 'timeSeriesMetrics' doc_type: 'reference' timeSeriesMetrics Table Function timeSeriesMetrics(db_name.time_series_table) - Returns the metrics table used by table db_name.time_series_table whose table engine is the TimeSeries engine: sql CREATE TABLE db_name.time_series_table ENGINE=TimeSeries METRICS metrics_table The function also works if the metrics table is inner: sql CREATE TABLE db_name.time_series_table ENGINE=TimeSeries METRICS INNER UUID '01234567-89ab-cdef-0123-456789abcdef' The following queries are equivalent: sql SELECT * FROM timeSeriesMetrics(db_name.time_series_table); SELECT * FROM timeSeriesMetrics('db_name.time_series_table'); SELECT * FROM timeSeriesMetrics('db_name', 'time_series_table');
{"source_file": "timeSeriesMetrics.md"}
0ab8f4b5-de22-4655-97f3-60198f41d559
description: 'The loop table function in ClickHouse is used to return query results in an infinite loop.' slug: /sql-reference/table-functions/loop title: 'loop' doc_type: 'reference' loop Table Function Syntax {#syntax} sql SELECT ... FROM loop(database, table); SELECT ... FROM loop(database.table); SELECT ... FROM loop(table); SELECT ... FROM loop(other_table_function(...)); Arguments {#arguments} | Argument | Description | |-----------------------------|----------------------------------------------------------------------------------------------------------------------| | database | database name. | | table | table name. | | other_table_function(...) | other table function. Example: SELECT * FROM loop(numbers(10)); other_table_function(...) here is numbers(10) . | Returned values {#returned_values} Infinite loop to return query results. Examples {#examples} Selecting data from ClickHouse: sql SELECT * FROM loop(test_database, test_table); SELECT * FROM loop(test_database.test_table); SELECT * FROM loop(test_table); Or using other table functions: sql SELECT * FROM loop(numbers(3)) LIMIT 7; β”Œβ”€number─┐ 1. β”‚ 0 β”‚ 2. β”‚ 1 β”‚ 3. β”‚ 2 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”Œβ”€number─┐ 4. β”‚ 0 β”‚ 5. β”‚ 1 β”‚ 6. β”‚ 2 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”Œβ”€number─┐ 7. β”‚ 0 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”˜ sql SELECT * FROM loop(mysql('localhost:3306', 'test', 'test', 'user', 'password')); ...
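The interaction of loop with LIMIT shown above is just "cycle the underlying rows, then cut off". A Python sketch of the same behavior using itertools:

```python
from itertools import cycle, islice

# loop(numbers(3)) LIMIT 7 behaves like cycling 0,1,2 and taking 7 rows.
numbers = range(3)
rows = list(islice(cycle(numbers), 7))
print(rows)  # [0, 1, 2, 0, 1, 2, 0]
```

Without the LIMIT (the islice here), the cycle never terminates, which is why loop queries normally carry a LIMIT or feed a consumer that stops reading.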
{"source_file": "loop.md"}
0c412f31-3e9b-44d9-9dba-544bfaae6902
description: 'Creates a table from the URL with given format and structure ' sidebar_label: 'url' sidebar_position: 200 slug: /sql-reference/table-functions/url title: 'url' doc_type: 'reference' import ExperimentalBadge from '@theme/badges/ExperimentalBadge'; import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge'; url Table Function url function creates a table from the URL with given format and structure . url function may be used in SELECT and INSERT queries on data in URL tables. Syntax {#syntax} sql url(URL [,format] [,structure] [,headers]) Parameters {#parameters} | Parameter | Description | |-------------|--------------------------------------------------------------------------------------------------------------------------------------------------------| | URL | Single quoted HTTP or HTTPS server address, which can accept GET or POST requests (for SELECT or INSERT queries correspondingly). Type: String . | | format | Format of the data. Type: String . | | structure | Table structure in 'UserID UInt64, Name String' format. Determines column names and types. Type: String . | | headers | Headers in 'headers('key1'='value1', 'key2'='value2')' format. You can set headers for HTTP call. | Returned value {#returned_value} A table with the specified format and structure and with data from the defined URL . Examples {#examples} Getting the first 3 lines of a table that contains columns of String and UInt32 type from HTTP-server which answers in CSV format. 
```sql
SELECT * FROM url('http://127.0.0.1:12345/', CSV, 'column1 String, column2 UInt32', headers('Accept'='text/csv; charset=utf-8')) LIMIT 3;
```

Inserting data from a URL into a table:

```sql
CREATE TABLE test_table (column1 String, column2 UInt32) ENGINE=Memory;
INSERT INTO FUNCTION url('http://127.0.0.1:8123/?query=INSERT+INTO+test_table+FORMAT+CSV', 'CSV', 'column1 String, column2 UInt32') VALUES ('http interface', 42);
SELECT * FROM test_table;
```

Globs in URL {#globs-in-url}

Patterns in curly brackets { } are used to generate a set of shards or to specify failover addresses. For supported pattern types and examples, see the description of the remote function. The character | inside patterns is used to specify failover addresses. They are iterated in the same order as listed in the pattern. The number of generated addresses is limited by the glob_expansion_max_elements setting.

Virtual Columns {#virtual-columns}

- _path β€” Path to the URL . Type: LowCardinality(String) .
- _file β€” Resource name of the URL . Type: LowCardinality(String) .
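Brace expansion in URL patterns multiplies out each {a,b} group. A rough Python sketch of this expansion (the helper is illustrative; ClickHouse's real implementation also handles numeric ranges and the | failover ordering):

```python
import re
from itertools import product

def expand_braces(url: str):
    """Sketch of {a,b} expansion: each brace group multiplies the address set."""
    groups = re.findall(r"\{([^{}]*)\}", url)
    template = re.sub(r"\{[^{}]*\}", "{}", url)
    return [template.format(*combo)
            for combo in product(*(g.split(",") for g in groups))]

print(expand_braces("http://host{1,2}/data_{a,b}.csv"))
# ['http://host1/data_a.csv', 'http://host1/data_b.csv',
#  'http://host2/data_a.csv', 'http://host2/data_b.csv']
```

The size of this cross product is what glob_expansion_max_elements caps.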
{"source_file": "url.md"}
Virtual Columns {#virtual-columns} _path β€” Path to the URL . Type: LowCardinality(String) . _file β€” Resource name of the URL . Type: LowCardinality(String) . _size β€” Size of the resource in bytes. Type: Nullable(UInt64) . If the size is unknown, the value is NULL . _time β€” Last modified time of the file. Type: Nullable(DateTime) . If the time is unknown, the value is NULL . _headers β€” HTTP response headers. Type: Map(LowCardinality(String), LowCardinality(String)) . use_hive_partitioning setting {#hive-style-partitioning} When the setting use_hive_partitioning is set to 1, ClickHouse will detect Hive-style partitioning in the path ( /name=value/ ) and will allow using partition columns as virtual columns in the query. These virtual columns will have the same names as in the partitioned path, but starting with _ . Example Use a virtual column created with Hive-style partitioning sql SELECT * FROM url('http://data/path/date=*/country=*/code=*/*.parquet') WHERE _date > '2020-01-01' AND _country = 'Netherlands' AND _code = 42; Storage Settings {#storage-settings} engine_url_skip_empty_files β€” allows skipping empty files while reading. Disabled by default. enable_url_encoding β€” enables/disables decoding/encoding of the path in the URI. Enabled by default. Permissions {#permissions} The url function requires the CREATE TEMPORARY TABLE permission. As such, it will not work for users with the readonly = 1 setting. At least readonly = 2 is required. Related {#related} Virtual columns
{"source_file": "url.md"}
description: 'Provides a read-only table-like interface to Apache Hudi tables in Amazon S3.' sidebar_label: 'hudi' sidebar_position: 85 slug: /sql-reference/table-functions/hudi title: 'hudi' doc_type: 'reference' hudi Table Function Provides a read-only table-like interface to Apache Hudi tables in Amazon S3. Syntax {#syntax} sql hudi(url [,aws_access_key_id, aws_secret_access_key] [,format] [,structure] [,compression]) Arguments {#arguments}
{"source_file": "hudi.md"}
Syntax {#syntax} sql hudi(url [,aws_access_key_id, aws_secret_access_key] [,format] [,structure] [,compression]) Arguments {#arguments} | Argument | Description | |----------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | url | Bucket url with the path to an existing Hudi table in S3. | | aws_access_key_id , aws_secret_access_key | Long-term credentials for the AWS account user. You can use these to authenticate your requests. These parameters are optional. If credentials are not specified, they are used from the ClickHouse configuration. For more information see Using S3 for Data Storage . | | format | The format of the file. | | structure | Structure of the table. Format 'column1_name column1_type, column2_name column2_type, ...' . | | compression | Parameter is optional. Supported values: none , gzip/gz , brotli/br , xz/LZMA , zstd/zst . By default, compression will be autodetected by the file extension. |
{"source_file": "hudi.md"}
Returned value {#returned_value} A table with the specified structure for reading data in the specified Hudi table in S3. Virtual Columns {#virtual-columns} _path β€” Path to the file. Type: LowCardinality(String) . _file β€” Name of the file. Type: LowCardinality(String) . _size β€” Size of the file in bytes. Type: Nullable(UInt64) . If the file size is unknown, the value is NULL . _time β€” Last modified time of the file. Type: Nullable(DateTime) . If the time is unknown, the value is NULL . _etag β€” The etag of the file. Type: LowCardinality(String) . If the etag is unknown, the value is NULL . Related {#related} Hudi engine Hudi cluster table function
{"source_file": "hudi.md"}
description: 'Perturbs the given query string with random variations.' sidebar_label: 'fuzzQuery' sidebar_position: 75 slug: /sql-reference/table-functions/fuzzQuery title: 'fuzzQuery' doc_type: 'reference' fuzzQuery Table Function Perturbs the given query string with random variations. Syntax {#syntax} sql fuzzQuery(query[, max_query_length[, random_seed]]) Arguments {#arguments} | Argument | Description | |--------------------|-----------------------------------------------------------------------------| | query | (String) - The source query to perform the fuzzing on. | | max_query_length | (UInt64) - A maximum length the query can get during the fuzzing process. | | random_seed | (UInt64) - A random seed for producing stable results. | Returned value {#returned_value} A table object with a single column containing perturbed query strings. Usage Example {#usage-example} sql SELECT * FROM fuzzQuery('SELECT materialize(\'a\' AS key) GROUP BY key') LIMIT 2; response β”Œβ”€query──────────────────────────────────────────────────────────┐ 1. β”‚ SELECT 'a' AS key GROUP BY key β”‚ 2. β”‚ EXPLAIN PIPELINE compact = true SELECT 'a' AS key GROUP BY key β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
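To make the parameters concrete, here is a toy Python sketch. It is not the AST-based fuzzer ClickHouse actually uses; it only illustrates how random_seed makes the perturbation reproducible and how max_query_length caps the result:

```python
# Toy illustration of the fuzzQuery parameters (NOT the real fuzzer):
# random_seed -> stable, reproducible perturbations
# max_query_length -> hard cap on the perturbed string
import random

def toy_fuzz(query, max_query_length=100, random_seed=None):
    rng = random.Random(random_seed)
    mutations = [
        lambda q: q.replace('SELECT', 'SELECT DISTINCT', 1),
        lambda q: q + ' LIMIT %d' % rng.randint(1, 10),
        lambda q: q,  # sometimes leave the query unchanged
    ]
    out = rng.choice(mutations)(query)
    return out[:max_query_length]

a = toy_fuzz('SELECT 1', random_seed=42)
b = toy_fuzz('SELECT 1', random_seed=42)
assert a == b  # same seed -> same result, as with the random_seed argument
```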
{"source_file": "fuzzQuery.md"}
description: 'Allows accessing all shards (configured in the remote_servers section) of a cluster without creating a Distributed table.' sidebar_label: 'cluster' sidebar_position: 30 slug: /sql-reference/table-functions/cluster title: 'clusterAllReplicas' doc_type: 'reference' clusterAllReplicas Table Function Allows accessing all shards (configured in the remote_servers section) of a cluster without creating a Distributed table. Only one replica of each shard is queried. clusterAllReplicas function β€” same as cluster , but all replicas are queried. Each replica in a cluster is used as a separate shard/connection. :::note All available clusters are listed in the system.clusters table. ::: Syntax {#syntax} sql cluster(['cluster_name', db.table, sharding_key]) cluster(['cluster_name', db, table, sharding_key]) clusterAllReplicas(['cluster_name', db.table, sharding_key]) clusterAllReplicas(['cluster_name', db, table, sharding_key]) Arguments {#arguments} | Arguments | Type | |-----------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------| | cluster_name | Name of a cluster that is used to build a set of addresses and connection parameters to remote and local servers, set default if not specified. | | db.table or db , table | Name of a database and a table. | | sharding_key | A sharding key. Optional. Needs to be specified if the cluster has more than one shard. | Returned value {#returned_value} The dataset from clusters. Using macros {#using_macros} cluster_name can contain macros β€” substitution in curly brackets. The substituted value is taken from the macros section of the server configuration file. 
Example: sql SELECT * FROM cluster('{cluster}', default.example_table); Usage and recommendations {#usage_recommendations} Using the cluster and clusterAllReplicas table functions is less efficient than creating a Distributed table because in this case, the server connection is re-established for every request. When processing a large number of queries, always create the Distributed table ahead of time, and do not use the cluster and clusterAllReplicas table functions. The cluster and clusterAllReplicas table functions can be useful in the following cases: Accessing a specific cluster for data comparison, debugging, and testing. Queries to various ClickHouse clusters and replicas for research purposes. Infrequent distributed requests that are made manually.
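The macro substitution described above can be sketched in Python (a simplified model; the real substitution is performed by the server from its macros configuration section):

```python
# Sketch: '{cluster}' in a cluster name is replaced with the value taken
# from the server's <macros> configuration section.
import re

def substitute_macros(name, macros):
    def repl(m):
        key = m.group(1)
        if key not in macros:
            raise KeyError('No macro %r in server configuration' % key)
        return macros[key]
    return re.sub(r'\{(\w+)\}', repl, name)

macros = {'cluster': 'production_cluster'}  # hypothetical <macros> contents
print(substitute_macros('{cluster}', macros))  # -> production_cluster
```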
{"source_file": "cluster.md"}
Queries to various ClickHouse clusters and replicas for research purposes. Infrequent distributed requests that are made manually. Connection settings like host , port , user , password , compression , secure are taken from <remote_servers> config section. See details in Distributed engine . Related {#related} skip_unavailable_shards load_balancing
{"source_file": "cluster.md"}
description: 'Allows processing files from URL in parallel from many nodes in a specified cluster.' sidebar_label: 'urlCluster' sidebar_position: 201 slug: /sql-reference/table-functions/urlCluster title: 'urlCluster' doc_type: 'reference' urlCluster Table Function Allows processing files from URL in parallel from many nodes in a specified cluster. On the initiator it creates a connection to all nodes in the cluster, expands asterisks in the URL file path, and dispatches each file dynamically. On the worker node it asks the initiator about the next task to process and processes it. This is repeated until all tasks are finished. Syntax {#syntax} sql urlCluster(cluster_name, URL, format, structure) Arguments {#arguments} | Argument | Description | |----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------| | cluster_name | Name of a cluster that is used to build a set of addresses and connection parameters to remote and local servers. | | URL | HTTP or HTTPS server address, which can accept GET requests. Type: String . | | format | Format of the data. Type: String . | | structure | Table structure in 'UserID UInt64, Name String' format. Determines column names and types. Type: String . | Returned value {#returned_value} A table with the specified format and structure and with data from the defined URL . Examples {#examples} Getting the first 3 lines of a table that contains columns of String and UInt32 type from an HTTP server that responds in CSV format. 
Create a basic HTTP server using the standard Python 3 tools and start it:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class CSVHTTPServer(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/csv')
        self.end_headers()
        self.wfile.write(bytes('Hello,1\nWorld,2\n', "utf-8"))

if __name__ == "__main__":
    server_address = ('127.0.0.1', 12345)
    HTTPServer(server_address, CSVHTTPServer).serve_forever()
```

sql SELECT * FROM urlCluster('cluster_simple','http://127.0.0.1:12345', CSV, 'column1 String, column2 UInt32') Globs in URL {#globs-in-url} Patterns in curly brackets { } are used to generate a set of shards or to specify failover addresses. For supported pattern types and examples, see the description of the remote function. The character | inside patterns is used to specify failover addresses. They are iterated in the same order as listed in the pattern. The number of generated addresses is limited by the glob_expansion_max_elements setting.
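The initiator/worker dispatch loop described above can be sketched with a shared task queue (a simplified single-process model; the node names and URLs are hypothetical):

```python
# Sketch of dynamic dispatch: the initiator holds a queue of expanded file
# URLs; each worker asks for the next task, processes it, and repeats until
# the queue is empty ("all tasks are finished").
import queue
import threading

tasks = queue.Queue()
for i in range(1, 7):
    tasks.put('http://host/data_%d.csv' % i)  # hypothetical expanded globs

processed = []
lock = threading.Lock()

def worker(name):
    while True:
        try:
            url = tasks.get_nowait()  # "ask the initiator for the next task"
        except queue.Empty:
            return  # no tasks left -> this worker is done
        with lock:
            processed.append((name, url))

threads = [threading.Thread(target=worker, args=('node%d' % n,)) for n in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert len(processed) == 6  # every file handled exactly once
```

Because workers pull tasks instead of receiving a fixed share, a slow node simply processes fewer files.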
{"source_file": "urlCluster.md"}
Related {#related} HDFS engine URL table function
{"source_file": "urlCluster.md"}
description: 'This table function allows integrating ClickHouse with Redis.' sidebar_label: 'redis' sidebar_position: 170 slug: /sql-reference/table-functions/redis title: 'redis' doc_type: 'reference' redis Table Function This table function allows integrating ClickHouse with Redis . Syntax {#syntax} sql redis(host:port, key, structure[, db_index[, password[, pool_size]]]) Arguments {#arguments} | Argument | Description | |-------------|------------------------------------------------------------------------------------------------------------| | host:port | Redis server address. The port can be omitted, in which case the default Redis port 6379 is used. | | key | Any column name in the column list. | | structure | The schema for the ClickHouse table returned from this function. | | db_index | Redis db index, ranging from 0 to 15; default is 0. | | password | User password; default is a blank string. | | pool_size | Redis max connection pool size; default is 16. | | primary | Must be specified; only one column is supported in the primary key. The primary key is serialized in binary as the Redis key. | Columns other than the primary key are serialized in binary as the Redis value, in the corresponding order. Queries with equality or IN filtering on the key are optimized to a multi-key lookup from Redis. Queries without a filter on the key result in a full table scan, which is a heavy operation. Named collections are not supported for the redis table function at the moment. Returned value {#returned_value} A table object with the key as the Redis key and the other columns packaged together as the Redis value. Usage Example {#usage-example} Read from Redis: sql SELECT * FROM redis( 'redis1:6379', 'key', 'key String, v1 String, v2 UInt32' ) Insert into Redis: sql INSERT INTO TABLE FUNCTION redis( 'redis1:6379', 'key', 'key String, v1 String, v2 UInt32') values ('1', '1', 1); Related {#related} The Redis table engine Using redis as a dictionary source
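The key/value layout described above can be sketched with an in-memory dict standing in for Redis (an illustration of the mapping, not the actual binary serialization):

```python
# Sketch: the primary-key column becomes the Redis key; the remaining
# columns are packed together, in order, as the value. A key filter turns
# into a multi-key lookup instead of a full scan.
structure = ['key', 'v1', 'v2']  # primary key: 'key'

store = {}  # stand-in for Redis

def insert(row):
    store[row['key']] = tuple(row[c] for c in structure[1:])

def lookup(keys):
    # 'key = ...' or 'key IN (...)' filters -> multi-key lookup
    return {k: store[k] for k in keys if k in store}

insert({'key': '1', 'v1': '1', 'v2': 1})
insert({'key': '2', 'v1': 'x', 'v2': 7})
print(lookup(['1', '2']))  # -> {'1': ('1', 1), '2': ('x', 7)}
```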
{"source_file": "redis.md"}
description: 'An extension to the iceberg table function which allows processing files from Apache Iceberg in parallel from many nodes in a specified cluster.' sidebar_label: 'icebergCluster' sidebar_position: 91 slug: /sql-reference/table-functions/icebergCluster title: 'icebergCluster' doc_type: 'reference' icebergCluster Table Function This is an extension to the iceberg table function. Allows processing files from Apache Iceberg in parallel from many nodes in a specified cluster. On initiator it creates a connection to all nodes in the cluster and dispatches each file dynamically. On the worker node it asks the initiator about the next task to process and processes it. This is repeated until all tasks are finished. Syntax {#syntax} ```sql icebergS3Cluster(cluster_name, url [, NOSIGN | access_key_id, secret_access_key, [session_token]] [,format] [,compression_method]) icebergS3Cluster(cluster_name, named_collection[, option=value [,..]]) icebergAzureCluster(cluster_name, connection_string|storage_account_url, container_name, blobpath, [,account_name], [,account_key] [,format] [,compression_method]) icebergAzureCluster(cluster_name, named_collection[, option=value [,..]]) icebergHDFSCluster(cluster_name, path_to_table, [,format] [,compression_method]) icebergHDFSCluster(cluster_name, named_collection[, option=value [,..]]) ``` Arguments {#arguments} cluster_name β€” Name of a cluster that is used to build a set of addresses and connection parameters to remote and local servers. Description of all other arguments coincides with description of arguments in equivalent iceberg table function. Returned value A table with the specified structure for reading data from cluster in the specified Iceberg table. Examples sql SELECT * FROM icebergS3Cluster('cluster_simple', 'http://test.s3.amazonaws.com/clickhouse-bucket/test_table', 'test', 'test') Virtual Columns {#virtual-columns} _path β€” Path to the file. Type: LowCardinality(String) . _file β€” Name of the file. 
Type: LowCardinality(String) . _size β€” Size of the file in bytes. Type: Nullable(UInt64) . If the file size is unknown, the value is NULL . _time β€” Last modified time of the file. Type: Nullable(DateTime) . If the time is unknown, the value is NULL . _etag β€” The etag of the file. Type: LowCardinality(String) . If the etag is unknown, the value is NULL . See Also Iceberg engine Iceberg table function
{"source_file": "icebergCluster.md"}
description: 'Turns a subquery into a table. The function implements views.' sidebar_label: 'view' sidebar_position: 210 slug: /sql-reference/table-functions/view title: 'view' doc_type: 'reference' view Table Function Turns a subquery into a table. The function implements views (see CREATE VIEW ). The resulting table does not store data, but only stores the specified SELECT query. When reading from the table, ClickHouse executes the query and deletes all unnecessary columns from the result. Syntax {#syntax} sql view(subquery) Arguments {#arguments} subquery β€” SELECT query. Returned value {#returned_value} A table. Examples {#examples} Input table: text β”Œβ”€id─┬─name─────┬─days─┐ β”‚ 1 β”‚ January β”‚ 31 β”‚ β”‚ 2 β”‚ February β”‚ 29 β”‚ β”‚ 3 β”‚ March β”‚ 31 β”‚ β”‚ 4 β”‚ April β”‚ 30 β”‚ β””β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ Query: sql SELECT * FROM view(SELECT name FROM months); Result: text β”Œβ”€name─────┐ β”‚ January β”‚ β”‚ February β”‚ β”‚ March β”‚ β”‚ April β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ You can use the view function as a parameter of the remote and cluster table functions: sql SELECT * FROM remote(`127.0.0.1`, view(SELECT a, b, c FROM table_name)); sql SELECT * FROM cluster(`cluster_name`, view(SELECT a, b, c FROM table_name)); Related {#related} View Table Engine
{"source_file": "view.md"}
description: 'A table engine which provides a table-like interface to SELECT from and INSERT into files, similar to the s3 table function. Use file() when working with local files, and s3() when working with buckets in object storage such as S3, GCS, or MinIO.' sidebar_label: 'file' sidebar_position: 60 slug: /sql-reference/table-functions/file title: 'file' doc_type: 'reference' import ExperimentalBadge from '@theme/badges/ExperimentalBadge'; import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge'; file Table Function A table engine which provides a table-like interface to SELECT from and INSERT into files, similar to the s3 table function. Use file() when working with local files, and s3() when working with buckets in object storage such as S3, GCS, or MinIO. The file function can be used in SELECT and INSERT queries to read from or write to files. Syntax {#syntax} sql file([path_to_archive ::] path [,format] [,structure] [,compression]) Arguments {#arguments}
{"source_file": "file.md"}
Syntax {#syntax} sql file([path_to_archive ::] path [,format] [,structure] [,compression]) Arguments {#arguments} | Parameter | Description | |-------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | path | The relative path to the file from user_files_path . Supports in read-only mode the following globs : * , ? , {abc,def} (with 'abc' and 'def' being strings) and {N..M} (with N and M being numbers). | | path_to_archive | The relative path to a zip/tar/7z archive. Supports the same globs as path . | | format | The format of the file. | | structure | Structure of the table. Format: 'column1_name column1_type, column2_name column2_type, ...' . | | compression | The existing compression type when used in a SELECT query, or the desired compression type when used in an INSERT query. Supported compression types are gz , br , xz , zst , lz4 , and bz2 . | Returned value {#returned_value} A table for reading or writing data in a file. Examples for Writing to a File {#examples-for-writing-to-a-file} Write to a TSV file {#write-to-a-tsv-file} sql INSERT INTO TABLE FUNCTION file('test.tsv', 'TSV', 'column1 UInt32, column2 UInt32, column3 UInt32') VALUES (1, 2, 3), (3, 2, 1), (1, 3, 2) As a result, the data is written into the file test.tsv : ```bash cat /var/lib/clickhouse/user_files/test.tsv 1 2 3 3 2 1 1 3 2 ``` Partitioned write to multiple TSV files {#partitioned-write-to-multiple-tsv-files}
{"source_file": "file.md"}
```bash cat /var/lib/clickhouse/user_files/test.tsv 1 2 3 3 2 1 1 3 2 ``` Partitioned write to multiple TSV files {#partitioned-write-to-multiple-tsv-files} If you specify a PARTITION BY expression when inserting data into a table function of type file() , then a separate file is created for each partition. Splitting the data into separate files helps to improve performance of read operations. sql INSERT INTO TABLE FUNCTION file('test_{_partition_id}.tsv', 'TSV', 'column1 UInt32, column2 UInt32, column3 UInt32') PARTITION BY column3 VALUES (1, 2, 3), (3, 2, 1), (1, 3, 2) As a result, the data is written into three files: test_1.tsv , test_2.tsv , and test_3.tsv . ```bash cat /var/lib/clickhouse/user_files/test_1.tsv 3 2 1 cat /var/lib/clickhouse/user_files/test_2.tsv 1 3 2 cat /var/lib/clickhouse/user_files/test_3.tsv 1 2 3 ``` Examples for Reading from a File {#examples-for-reading-from-a-file} SELECT from a CSV file {#select-from-a-csv-file} First, set user_files_path in the server configuration and prepare a file test.csv : ```bash $ grep user_files_path /etc/clickhouse-server/config.xml /var/lib/clickhouse/user_files/ $ cat /var/lib/clickhouse/user_files/test.csv 1,2,3 3,2,1 78,43,45 ``` Then, read data from test.csv into a table and select its first two rows: sql SELECT * FROM file('test.csv', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32') LIMIT 2; text β”Œβ”€column1─┬─column2─┬─column3─┐ β”‚ 1 β”‚ 2 β”‚ 3 β”‚ β”‚ 3 β”‚ 2 β”‚ 1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Inserting data from a file into a table {#inserting-data-from-a-file-into-a-table} sql INSERT INTO FUNCTION file('test.csv', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32') VALUES (1, 2, 3), (3, 2, 1); sql SELECT * FROM file('test.csv', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32'); text β”Œβ”€column1─┬─column2─┬─column3─┐ β”‚ 1 β”‚ 2 β”‚ 3 β”‚ β”‚ 3 β”‚ 2 β”‚ 1 β”‚ 
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Reading data from table.csv , located in archive1.zip and/or archive2.zip : sql SELECT * FROM file('user_files/archives/archive{1..2}.zip :: table.csv'); Globs in path {#globs-in-path} Paths may use globbing. Files must match the whole path pattern, not only the suffix or prefix. There is one exception: if the path refers to an existing directory and does not use globs, a * is implicitly appended to the path so all the files in the directory are selected. * β€” Represents arbitrarily many characters except / but including the empty string. ? β€” Represents an arbitrary single character. {some_string,another_string,yet_another_one} β€” Substitutes any of the strings 'some_string', 'another_string', 'yet_another_one' . The strings can contain the / symbol. {N..M} β€” Represents any number >= N and <= M . ** β€” Represents all files inside a folder recursively.
{"source_file": "file.md"}
{N..M} β€” Represents any number >= N and <= M . ** - Represents all files inside a folder recursively. Constructions with {} are similar to the remote and hdfs table functions. Examples {#examples} Example Suppose there are these files with the following relative paths: some_dir/some_file_1 some_dir/some_file_2 some_dir/some_file_3 another_dir/some_file_1 another_dir/some_file_2 another_dir/some_file_3 Query the total number of rows in all files: sql SELECT count(*) FROM file('{some,another}_dir/some_file_{1..3}', 'TSV', 'name String, value UInt32'); An alternative path expression which achieves the same: sql SELECT count(*) FROM file('{some,another}_dir/*', 'TSV', 'name String, value UInt32'); Query the total number of rows in some_dir using the implicit * : sql SELECT count(*) FROM file('some_dir', 'TSV', 'name String, value UInt32'); :::note If your listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use ? . ::: Example Query the total number of rows in files named file000 , file001 , ... , file999 : sql SELECT count(*) FROM file('big_dir/file{0..9}{0..9}{0..9}', 'CSV', 'name String, value UInt32'); Example Query the total number of rows from all files inside directory big_dir/ recursively: sql SELECT count(*) FROM file('big_dir/**', 'CSV', 'name String, value UInt32'); Example Query the total number of rows from all files file002 inside any folder in directory big_dir/ recursively: sql SELECT count(*) FROM file('big_dir/**/file002', 'CSV', 'name String, value UInt32'); Virtual Columns {#virtual-columns} _path β€” Path to the file. Type: LowCardinality(String) . _file β€” Name of the file. Type: LowCardinality(String) . _size β€” Size of the file in bytes. Type: Nullable(UInt64) . If the file size is unknown, the value is NULL . _time β€” Last modified time of the file. Type: Nullable(DateTime) . If the time is unknown, the value is NULL . 
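The leading-zeros note above can be checked with a small Python sketch of the {N..M} expansion: one brace group per digit preserves the zero padding, while a single {0..999} group does not:

```python
# Sketch of '{N..M}' expansion: a single '{0..999}' produces '0', '1', ...
# without padding, so it would never match 'file007'; one '{0..9}' group
# per digit keeps the leading zeros.
import itertools

def expand_range(n, m):
    return [str(i) for i in range(n, m + 1)]

digit = expand_range(0, 9)
padded = ['file' + ''.join(d) for d in itertools.product(digit, digit, digit)]
unpadded = ['file' + s for s in expand_range(0, 999)]

assert 'file007' in padded        # matched by file{0..9}{0..9}{0..9}
assert 'file007' not in unpadded  # not matched by file{0..999}
assert len(padded) == 1000        # file000 .. file999
```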
use_hive_partitioning setting {#hive-style-partitioning} When the setting use_hive_partitioning is set to 1, ClickHouse will detect Hive-style partitioning in the path ( /name=value/ ) and will allow using partition columns as virtual columns in the query. These virtual columns will have the same names as in the partitioned path, but starting with _ . Example Use a virtual column created with Hive-style partitioning sql SELECT * FROM file('data/path/date=*/country=*/code=*/*.parquet') WHERE _date > '2020-01-01' AND _country = 'Netherlands' AND _code = 42; Settings {#settings}
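The path detection described above can be sketched in Python (an illustration of the /name=value/ convention, not ClickHouse's parser):

```python
# Sketch: each '/name=value/' segment of a Hive-style path becomes a
# virtual column with the same name prefixed by '_'.
def hive_partition_columns(path):
    cols = {}
    for segment in path.split('/'):
        if '=' in segment:
            name, _, value = segment.partition('=')
            cols['_' + name] = value
    return cols

cols = hive_partition_columns('data/path/date=2020-02-01/country=Netherlands/code=42/x.parquet')
print(cols)  # -> {'_date': '2020-02-01', '_country': 'Netherlands', '_code': '42'}
```

Note that the values are extracted as strings; comparisons like _code = 42 rely on the query engine's type handling.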
{"source_file": "file.md"}
sql SELECT * FROM file('data/path/date=*/country=*/code=*/*.parquet') WHERE _date > '2020-01-01' AND _country = 'Netherlands' AND _code = 42; Settings {#settings} | Setting | Description | |--------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | engine_file_empty_if_not_exists | allows to select empty data from a file that doesn't exist. Disabled by default. | | engine_file_truncate_on_insert | allows to truncate file before insert into it. Disabled by default. | | engine_file_allow_create_multiple_files | allows to create a new file on each insert if format has suffix. Disabled by default. | | engine_file_skip_empty_files | allows to skip empty files while reading. Disabled by default. | | storage_file_read_method | method of reading data from storage file, one of: read, pread, mmap (only for clickhouse-local). Default value: pread for clickhouse-server, mmap for clickhouse-local. | Related {#related} Virtual columns Rename files after processing
{"source_file": "file.md"}
description: 'Allows performing queries on data exposed via an Apache Arrow Flight server.' sidebar_label: 'arrowFlight' sidebar_position: 186 slug: /sql-reference/table-functions/arrowflight title: 'arrowFlight' doc_type: 'reference' arrowFlight Table Function Allows performing queries on data exposed via an Apache Arrow Flight server. Syntax sql arrowFlight('host:port', 'dataset_name' [, 'username', 'password']) Arguments host:port β€” Address of the Arrow Flight server. String . dataset_name β€” Name of the dataset or descriptor available on the Arrow Flight server. String . username β€” Username to use with basic HTTP-style authentication. password β€” Password to use with basic HTTP-style authentication. If username and password are not specified, authentication is not used (this works only if the Arrow Flight server allows it). Returned value A table object representing the remote dataset. The schema is inferred from the Arrow Flight response. Example Query: sql SELECT * FROM arrowFlight('127.0.0.1:9005', 'sample_dataset') ORDER BY id; Result: text β”Œβ”€id─┬─name────┬─value─┐ β”‚ 1 β”‚ foo β”‚ 42.1 β”‚ β”‚ 2 β”‚ bar β”‚ 13.3 β”‚ β”‚ 3 β”‚ baz β”‚ 77.0 β”‚ β””β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ See Also Arrow Flight table engine Apache Arrow Flight SQL
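If the Arrow Flight server requires basic authentication, the optional credential arguments can be added; the credential values below are illustrative:

```sql
SELECT * FROM arrowFlight('127.0.0.1:9005', 'sample_dataset', 'flight_user', 'flight_password') ORDER BY id;
```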
{"source_file": "arrowflight.md"}
description: 'Reads time series from a TimeSeries table filtered by a selector and with timestamps in a specified interval.' sidebar_label: 'timeSeriesSelector' sidebar_position: 145 slug: /sql-reference/table-functions/timeSeriesSelector title: 'timeSeriesSelector' doc_type: 'reference' timeSeriesSelector Table Function Reads time series from a TimeSeries table filtered by a selector and with timestamps in a specified interval. This function is similar to range selectors but it's used to implement instant selectors too. Syntax {#syntax} sql timeSeriesSelector('db_name', 'time_series_table', 'instant_query', min_time, max_time) timeSeriesSelector(db_name.time_series_table, 'instant_query', min_time, max_time) timeSeriesSelector('time_series_table', 'instant_query', min_time, max_time) Arguments {#arguments} db_name - The name of the database where a TimeSeries table is located. time_series_table - The name of a TimeSeries table. instant_query - An instant selector written in PromQL syntax , without @ or offset modifiers. min_time - Start timestamp, inclusive. max_time - End timestamp, inclusive. Returned value {#returned_value} The function returns three columns: - id - Contains the identifiers of time series matching the specified selector. - timestamp - Contains timestamps. - value - Contains values. There is no specific order for returned data. Example {#example} sql SELECT * FROM timeSeriesSelector(mytable, 'http_requests{job="prometheus"}', now() - INTERVAL 10 MINUTES, now())
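The selector can also be called with an explicit database name and literal bounds; apart from the function signature, the names and timestamps below are illustrative:

```sql
SELECT *
FROM timeSeriesSelector('db_name', 'time_series_table', 'http_requests{job="prometheus"}',
    toDateTime('2024-01-01 00:00:00'), toDateTime('2024-01-01 00:10:00'))
ORDER BY timestamp;
```

Since the function guarantees no specific order, an explicit ORDER BY is added here.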
{"source_file": "timeSeriesSelector.md"}
description: 'timeSeriesTags table function returns the tags table used by table db_name.time_series_table whose table engine is the TimeSeries engine.' sidebar_label: 'timeSeriesTags' sidebar_position: 145 slug: /sql-reference/table-functions/timeSeriesTags title: 'timeSeriesTags' doc_type: 'reference' timeSeriesTags Table Function timeSeriesTags(db_name.time_series_table) - Returns the tags table used by table db_name.time_series_table whose table engine is the TimeSeries engine: sql CREATE TABLE db_name.time_series_table ENGINE=TimeSeries TAGS tags_table The function also works if the tags table is inner: sql CREATE TABLE db_name.time_series_table ENGINE=TimeSeries TAGS INNER UUID '01234567-89ab-cdef-0123-456789abcdef' The following queries are equivalent: sql SELECT * FROM timeSeriesTags(db_name.time_series_table); SELECT * FROM timeSeriesTags('db_name.time_series_table'); SELECT * FROM timeSeriesTags('db_name', 'time_series_table');
{"source_file": "timeSeriesTags.md"}
description: 'An extension to the paimon table function which allows processing files from Apache Paimon in parallel from many nodes in a specified cluster.' sidebar_label: 'paimonCluster' sidebar_position: 91 slug: /sql-reference/table-functions/paimonCluster title: 'paimonCluster' doc_type: 'reference' paimonCluster Table Function This is an extension to the paimon table function. Allows processing files from Apache Paimon in parallel from many nodes in a specified cluster. On the initiator it creates a connection to all nodes in the cluster and dispatches each file dynamically. On the worker node it asks the initiator about the next task to process and processes it. This is repeated until all tasks are finished. Syntax {#syntax} ```sql paimonS3Cluster(cluster_name, url [,aws_access_key_id, aws_secret_access_key] [,format] [,structure] [,compression]) paimonAzureCluster(cluster_name, connection_string|storage_account_url, container_name, blobpath, [,account_name], [,account_key] [,format] [,compression_method]) paimonHDFSCluster(cluster_name, path_to_table, [,format] [,compression_method]) ``` Arguments {#arguments} cluster_name β€” Name of a cluster that is used to build a set of addresses and connection parameters to remote and local servers. The descriptions of all other arguments coincide with those in the equivalent paimon table function. Returned value A table with the specified structure for reading data from the cluster in the specified Paimon table. Virtual Columns {#virtual-columns} _path β€” Path to the file. Type: LowCardinality(String) . _file β€” Name of the file. Type: LowCardinality(String) . _size β€” Size of the file in bytes. Type: Nullable(UInt64) . If the file size is unknown, the value is NULL . _time β€” Last modified time of the file. Type: Nullable(DateTime) . If the time is unknown, the value is NULL . _etag β€” The etag of the file. Type: LowCardinality(String) . If the etag is unknown, the value is NULL . 
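A usage sketch, assuming a configured cluster named cluster_simple; the S3 endpoint, path, and credentials are illustrative:

```sql
SELECT count(*)
FROM paimonS3Cluster(
    'cluster_simple',
    'http://minio:9000/warehouse/my_table/',
    'minio_user', 'minio_password');
```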
See Also Paimon table function
{"source_file": "paimonCluster.md"}
description: 'Allows SELECT and INSERT queries to be performed on data stored on a remote MySQL server.' sidebar_label: 'mysql' sidebar_position: 137 slug: /sql-reference/table-functions/mysql title: 'mysql' doc_type: 'reference' mysql Table Function Allows SELECT and INSERT queries to be performed on data stored on a remote MySQL server. Syntax {#syntax} sql mysql({host:port, database, table, user, password[, replace_query, on_duplicate_clause] | named_collection[, option=value [,..]]}) Arguments {#arguments}
{"source_file": "mysql.md"}
| Argument | Description | |---------------------|-------------| | host:port | MySQL server address. | | database | Remote database name. | | table | Remote table name. | | user | MySQL user. | | password | User password. | | replace_query | Flag that converts INSERT INTO queries to REPLACE INTO . Possible values: - 0 - The query is executed as INSERT INTO . - 1 - The query is executed as REPLACE INTO . | | on_duplicate_clause | The ON DUPLICATE KEY on_duplicate_clause expression that is added to the INSERT query. Can be specified only with replace_query = 0 (if you simultaneously pass replace_query = 1 and on_duplicate_clause , ClickHouse generates an exception). Example: INSERT INTO t (c1,c2) VALUES ('a', 2) ON DUPLICATE KEY UPDATE c2 = c2 + 1; on_duplicate_clause here is UPDATE c2 = c2 + 1 . See the MySQL documentation to find which on_duplicate_clause you can use with the ON DUPLICATE KEY clause. |
{"source_file": "mysql.md"}
Arguments can also be passed using named collections . In this case host and port should be specified separately. This approach is recommended for production environments. Simple WHERE clauses such as =, !=, >, >=, <, <= are currently executed on the MySQL server. The rest of the conditions and the LIMIT sampling constraint are executed in ClickHouse only after the query to MySQL finishes. Supports multiple replicas that must be listed by | . For example: sql SELECT name FROM mysql(`mysql{1|2|3}:3306`, 'mysql_database', 'mysql_table', 'user', 'password'); or sql SELECT name FROM mysql(`mysql1:3306|mysql2:3306|mysql3:3306`, 'mysql_database', 'mysql_table', 'user', 'password'); Returned value {#returned_value} A table object with the same columns as the original MySQL table. :::note Some MySQL data types can be mapped to different ClickHouse types - this is addressed by the query-level setting mysql_datatypes_support_level ::: :::note In the INSERT query, to distinguish the table function mysql(...) from a table name with a column names list, you must use the keywords FUNCTION or TABLE FUNCTION . See examples below. ::: Examples {#examples} Table in MySQL:
```text
mysql> CREATE TABLE test.test (
    -> int_id INT NOT NULL AUTO_INCREMENT,
    -> float FLOAT NOT NULL,
    -> PRIMARY KEY (int_id));

mysql> INSERT INTO test (int_id, float) VALUES (1,2);

mysql> SELECT * FROM test;
+--------+-------+
| int_id | float |
+--------+-------+
|      1 |     2 |
+--------+-------+
```
Selecting data from ClickHouse: sql SELECT * FROM mysql('localhost:3306', 'test', 'test', 'bayonet', '123'); Or using named collections : sql CREATE NAMED COLLECTION creds AS host = 'localhost', port = 3306, database = 'test', user = 'bayonet', password = '123'; SELECT * FROM mysql(creds, table='test'); text β”Œβ”€int_id─┬─float─┐ β”‚ 1 β”‚ 2 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ Replacing and inserting: sql INSERT INTO FUNCTION mysql('localhost:3306', 'test', 'test', 'bayonet', '123', 1) (int_id, float) VALUES (1, 3); INSERT INTO TABLE FUNCTION mysql('localhost:3306', 'test', 'test', 'bayonet', '123', 0, 'UPDATE int_id = int_id + 1') (int_id, float) VALUES (1, 4); SELECT * FROM mysql('localhost:3306', 'test', 'test', 'bayonet', '123'); text β”Œβ”€int_id─┬─float─┐ β”‚ 1 β”‚ 3 β”‚ β”‚ 2 β”‚ 4 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ Copying data from a MySQL table into a ClickHouse table:
```sql
CREATE TABLE mysql_copy
(
    id UInt64,
    datetime DateTime('UTC'),
    description String
)
ENGINE = MergeTree
ORDER BY (id, datetime);

INSERT INTO mysql_copy
SELECT * FROM mysql('host:port', 'database', 'table', 'user', 'password');
```
Or if copying only an incremental batch from MySQL based on the max current id: sql INSERT INTO mysql_copy SELECT * FROM mysql('host:port', 'database', 'table', 'user', 'password') WHERE id > (SELECT max(id) FROM mysql_copy); Related {#related} The 'MySQL' table engine
{"source_file": "mysql.md"}
Using MySQL as a dictionary source mysql_datatypes_support_level mysql_map_fixed_string_to_text_in_show_columns mysql_map_string_to_text_in_show_columns mysql_max_rows_to_insert
{"source_file": "mysql.md"}
description: 'Evaluates a prometheus query using data from a TimeSeries table.' sidebar_label: 'prometheusQueryRange' sidebar_position: 145 slug: /sql-reference/table-functions/prometheusQueryRange title: 'prometheusQueryRange' doc_type: 'reference' prometheusQueryRange Table Function Evaluates a prometheus query using data from a TimeSeries table over a range of evaluation times. Syntax {#syntax} sql prometheusQueryRange('db_name', 'time_series_table', 'promql_query', start_time, end_time, step) prometheusQueryRange(db_name.time_series_table, 'promql_query', start_time, end_time, step) prometheusQueryRange('time_series_table', 'promql_query', start_time, end_time, step) Arguments {#arguments} db_name - The name of the database where a TimeSeries table is located. time_series_table - The name of a TimeSeries table. promql_query - A query written in PromQL syntax . start_time - The start time of the evaluation range. end_time - The end time of the evaluation range. step - The step used to iterate the evaluation time from start_time to end_time (inclusive). Returned value {#returned_value} The function can return different columns depending on the result type of the query passed to parameter promql_query : | Result Type | Result Columns | Example | |-------------|----------------|---------| | vector | tags Array(Tuple(String, String)), timestamp TimestampType, value ValueType | prometheusQuery(mytable, 'up') | | matrix | tags Array(Tuple(String, String)), time_series Array(Tuple(TimestampType, ValueType)) | prometheusQuery(mytable, 'up[1m]') | | scalar | scalar ValueType | prometheusQuery(mytable, '1h30m') | | string | string String | prometheusQuery(mytable, '"abc"') | Example {#example} sql SELECT * FROM prometheusQueryRange(mytable, 'rate(http_requests{job="prometheus"}[10m])[1h:10m]', now() - INTERVAL 10 MINUTES, now(), INTERVAL 1 MINUTE)
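For instance, a range evaluation of the instant vector up over the last day at an hourly step might look like this (the table name is illustrative):

```sql
SELECT *
FROM prometheusQueryRange(mytable, 'up', now() - INTERVAL 1 DAY, now(), INTERVAL 1 HOUR);
```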
{"source_file": "prometheusQueryRange.md"}
description: 'The table function allows reading data from the YTsaurus cluster.' sidebar_label: 'ytsaurus' sidebar_position: 85 slug: /sql-reference/table-functions/ytsaurus title: 'ytsaurus' doc_type: 'reference' import ExperimentalBadge from '@theme/badges/ExperimentalBadge'; ytsaurus Table Function The table function allows reading data from the YTsaurus cluster. Syntax {#syntax} sql ytsaurus(http_proxy_url, cypress_path, oauth_token, format) :::info This is an experimental feature that may change in backwards-incompatible ways in future releases. Enable usage of the YTsaurus table function with the allow_experimental_ytsaurus_table_function setting: SET allow_experimental_ytsaurus_table_function = 1 . ::: Arguments {#arguments} http_proxy_url β€” URL of the YTsaurus HTTP proxy. cypress_path β€” Cypress path to the data source. oauth_token β€” OAuth token. format β€” The format of the data source. Returned value A table with the specified structure for reading data at the specified Cypress path in the YTsaurus cluster. See Also ytsaurus engine
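Putting the pieces together, a usage sketch with an illustrative proxy URL, Cypress path, and token:

```sql
SET allow_experimental_ytsaurus_table_function = 1;

SELECT *
FROM ytsaurus('http://localhost:80', '//home/my_table', 'my_oauth_token', 'JSONEachRow')
LIMIT 10;
```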
{"source_file": "ytsaurus.md"}
description: 'Represents the contents of some projection in MergeTree tables. It can be used for introspection.' sidebar_label: 'mergeTreeProjection' sidebar_position: 77 slug: /sql-reference/table-functions/mergeTreeProjection title: 'mergeTreeProjection' doc_type: 'reference' mergeTreeProjection Table Function Represents the contents of some projection in MergeTree tables. It can be used for introspection. Syntax {#syntax} sql mergeTreeProjection(database, table, projection) Arguments {#arguments} | Argument | Description | |--------------|--------------------------------------------| | database | The database name to read projection from. | | table | The table name to read projection from. | | projection | The projection to read from. | Returned value {#returned_value} A table object with columns provided by given projection. Usage Example {#usage-example}
```sql
CREATE TABLE test
(
    user_id UInt64,
    item_id UInt64,
    PROJECTION order_by_item_id
    (
        SELECT _part_offset ORDER BY item_id
    )
)
ENGINE = MergeTree
ORDER BY user_id;

INSERT INTO test SELECT number, 100 - number FROM numbers(5);
```
sql SELECT *, _part_offset FROM mergeTreeProjection(currentDatabase(), test, order_by_item_id); text β”Œβ”€item_id─┬─_parent_part_offset─┬─_part_offset─┐ 1. β”‚ 96 β”‚ 4 β”‚ 0 β”‚ 2. β”‚ 97 β”‚ 3 β”‚ 1 β”‚ 3. β”‚ 98 β”‚ 2 β”‚ 2 β”‚ 4. β”‚ 99 β”‚ 1 β”‚ 3 β”‚ 5. β”‚ 100 β”‚ 0 β”‚ 4 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ sql DESCRIBE mergeTreeProjection(currentDatabase(), test, order_by_item_id) SETTINGS describe_compact_output = 1; text β”Œβ”€name────────────────┬─type───┐ 1. β”‚ item_id β”‚ UInt64 β”‚ 2. β”‚ _parent_part_offset β”‚ UInt64 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜
{"source_file": "mergeTreeProjection.md"}
description: 'Provides a table-like interface to select/insert files in Amazon S3 and Google Cloud Storage. This table function is similar to the hdfs function, but provides S3-specific features.' keywords: ['s3', 'gcs', 'bucket'] sidebar_label: 's3' sidebar_position: 180 slug: /sql-reference/table-functions/s3 title: 's3 Table Function' doc_type: 'reference' import ExperimentalBadge from '@theme/badges/ExperimentalBadge'; import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge'; s3 Table Function Provides a table-like interface to select/insert files in Amazon S3 and Google Cloud Storage . This table function is similar to the hdfs function , but provides S3-specific features. If you have multiple replicas in your cluster, you can use the s3Cluster function instead to parallelize inserts. When using the s3 table function with INSERT INTO...SELECT , data is read and inserted in a streaming fashion. Only a few blocks of data reside in memory while the blocks are continuously read from S3 and pushed into the destination table. Syntax {#syntax} sql s3(url [, NOSIGN | access_key_id, secret_access_key, [session_token]] [,format] [,structure] [,compression_method] [,headers] [,partition_strategy] [,partition_columns_in_data_file]) s3(named_collection[, option=value [,..]]) :::tip GCS The S3 Table Function integrates with Google Cloud Storage by using the GCS XML API and HMAC keys. See the Google interoperability docs for more details about the endpoint and HMAC. For GCS, substitute your HMAC key and HMAC secret where you see access_key_id and secret_access_key . ::: Parameters The s3 table function supports the following plain parameters:
{"source_file": "s3.md"}
| Parameter | Description | |-----------|-------------| | url | Bucket URL with path to file. Supports the following wildcards in read-only mode: * , ** , ? , {abc,def} and {N..M} where N , M β€” numbers, 'abc' , 'def' β€” strings. For more information see here . | | NOSIGN | If this keyword is provided in place of credentials, the requests will not be signed. | | access_key_id and secret_access_key | Keys that specify credentials to use with the given endpoint. Optional. | | session_token | Session token to use with the given keys. Optional when passing keys. | | format | The format of the file. | 
{"source_file": "s3.md"}
| structure | Structure of the table. Format 'column1_name column1_type, column2_name column2_type, ...' . | | compression_method | Parameter is optional. Supported values: none , gzip or gz , brotli or br , xz or LZMA , zstd or zst . By default, it will autodetect the compression method by file extension. | | headers | Parameter is optional. Allows headers to be passed in the S3 request. Pass in the format headers(key=value) e.g. headers('x-amz-request-payer' = 'requester') . | | partition_strategy | Parameter is optional. Supported values: WILDCARD or HIVE . WILDCARD requires a {_partition_id} in the path, which is replaced with the partition key. HIVE does not allow wildcards, assumes the path is the table root, and generates Hive-style partitioned directories with Snowflake IDs as filenames and the file format as the extension. Defaults to WILDCARD . | | partition_columns_in_data_file | Parameter is optional. Only used with the HIVE partition strategy. Tells ClickHouse whether to expect partition columns to be written in the data file. Defaults to false . | | storage_class_name | Parameter is optional. Supported values: STANDARD or INTELLIGENT_TIERING . Allows specifying AWS S3 Intelligent Tiering . Defaults to STANDARD . |
{"source_file": "s3.md"}
:::note GCS The GCS url is in this format as the endpoint for the Google XML API is different from the JSON API: text https://storage.googleapis.com/<bucket>/<folder>/<filename(s)> and not ~~https://storage.cloud.google.com~~. ::: Arguments can also be passed using named collections . In this case url , access_key_id , secret_access_key , format , structure , compression_method work in the same way, and some extra parameters are supported: | Argument | Description | |-------------------------------|-------------| | filename | Appended to the url if specified. | | use_environment_credentials | Enabled by default; allows passing extra parameters using the environment variables AWS_CONTAINER_CREDENTIALS_RELATIVE_URI , AWS_CONTAINER_CREDENTIALS_FULL_URI , AWS_CONTAINER_AUTHORIZATION_TOKEN , AWS_EC2_METADATA_DISABLED . | | no_sign_request | Disabled by default. | | expiration_window_seconds | Default value is 120. | Returned value {#returned_value} A table with the specified structure for reading or writing data in the specified file. 
Examples {#examples} Selecting the first 5 rows from the table from S3 file https://datasets-documentation.s3.eu-west-3.amazonaws.com/aapl_stock.csv : sql SELECT * FROM s3( 'https://datasets-documentation.s3.eu-west-3.amazonaws.com/aapl_stock.csv', 'CSVWithNames' ) LIMIT 5; response β”Œβ”€β”€β”€β”€β”€β”€β”€Date─┬────Open─┬────High─┬─────Low─┬───Close─┬───Volume─┬─OpenInt─┐ β”‚ 1984-09-07 β”‚ 0.42388 β”‚ 0.42902 β”‚ 0.41874 β”‚ 0.42388 β”‚ 23220030 β”‚ 0 β”‚ β”‚ 1984-09-10 β”‚ 0.42388 β”‚ 0.42516 β”‚ 0.41366 β”‚ 0.42134 β”‚ 18022532 β”‚ 0 β”‚ β”‚ 1984-09-11 β”‚ 0.42516 β”‚ 0.43668 β”‚ 0.42516 β”‚ 0.42902 β”‚ 42498199 β”‚ 0 β”‚ β”‚ 1984-09-12 β”‚ 0.42902 β”‚ 0.43157 β”‚ 0.41618 β”‚ 0.41618 β”‚ 37125801 β”‚ 0 β”‚ β”‚ 1984-09-13 β”‚ 0.43927 β”‚ 0.44052 β”‚ 0.43927 β”‚ 0.43927 β”‚ 57822062 β”‚ 0 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ :::note ClickHouse uses filename extensions to determine the format of the data. For example, we could have run the previous command without the CSVWithNames :
{"source_file": "s3.md"}
sql SELECT * FROM s3( 'https://datasets-documentation.s3.eu-west-3.amazonaws.com/aapl_stock.csv' ) LIMIT 5; ClickHouse can also determine the compression method of the file. For example, if the file has a .csv.gz extension, ClickHouse decompresses it automatically. ::: :::note Parquet files with names like *.parquet.snappy or *.parquet.zstd can confuse ClickHouse and cause TOO_LARGE_COMPRESSED_BLOCK or ZSTD_DECODER_FAILED errors. This is because ClickHouse would attempt to read the entire file as Snappy or ZSTD-encoded data when, in fact, Parquet applies compression at the row-group and column level. Parquet metadata already specifies the per-column compression, and so the file extension is superfluous. You can just use compression_method = 'none' in such cases: sql SELECT * FROM s3( 'https://<my-bucket>.s3.<my-region>.amazonaws.com/path/to/my-data.parquet.snappy', compression_method = 'none' ); ::: Usage {#usage} Suppose that we have several files with the following URIs on S3: 'https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/some_prefix/some_file_1.csv' 'https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/some_prefix/some_file_2.csv' 'https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/some_prefix/some_file_3.csv' 'https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/some_prefix/some_file_4.csv' 'https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/another_prefix/some_file_1.csv' 'https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/another_prefix/some_file_2.csv' 'https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/another_prefix/some_file_3.csv' 'https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/another_prefix/some_file_4.csv' 
Count the number of rows in files ending with numbers from 1 to 3: sql SELECT count(*) FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/my-test-bucket-768/{some,another}_prefix/some_file_{1..3}.csv', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32') text β”Œβ”€count()─┐ β”‚ 18 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Count the total number of rows in all files in these two directories: sql SELECT count(*) FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/my-test-bucket-768/{some,another}_prefix/*', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32') text β”Œβ”€count()─┐ β”‚ 24 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ :::tip If your listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use ? . ::: Count the total number of rows in files named file-000.csv , file-001.csv , ... , file-999.csv :
{"source_file": "s3.md"}
sql SELECT count(*) FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/my-test-bucket-768/big_prefix/file-{000..999}.csv', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32'); text β”Œβ”€count()─┐ β”‚ 12 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Insert data into file test-data.csv.gz : sql INSERT INTO FUNCTION s3('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip') VALUES ('test-data', 1), ('test-data-2', 2); Insert data into file test-data.csv.gz from an existing table: sql INSERT INTO FUNCTION s3('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip') SELECT name, value FROM existing_table; The glob ** can be used for recursive directory traversal. Consider the example below; it fetches all files from the my-test-bucket-768 directory recursively: sql SELECT * FROM s3('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/**', 'CSV', 'name String, value UInt32', 'gzip'); The query below gets data from all test-data.csv.gz files in any folder inside the my-test-bucket-768 directory recursively: sql SELECT * FROM s3('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/**/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip'); Note: it is possible to specify custom URL mappers in the server configuration file. 
Example: sql SELECT * FROM s3('s3://clickhouse-public-datasets/my-test-bucket-768/**/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip'); The URL 's3://clickhouse-public-datasets/my-test-bucket-768/**/test-data.csv.gz' would be replaced with 'http://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/**/test-data.csv.gz' A custom mapper can be added to config.xml : xml <url_scheme_mappers> <s3> <to>https://{bucket}.s3.amazonaws.com</to> </s3> <gs> <to>https://{bucket}.storage.googleapis.com</to> </gs> <oss> <to>https://{bucket}.oss.aliyuncs.com</to> </oss> </url_scheme_mappers> For production use cases it is recommended to use named collections . Here is an example: ```sql CREATE NAMED COLLECTION creds AS access_key_id = ' ', secret_access_key = ' '; SELECT count(*) FROM s3(creds, url='https://s3-object-url.csv') ``` Partitioned Write {#partitioned-write} Partition Strategy {#partition-strategy} Supported for INSERT queries only. WILDCARD (default): Replaces the {_partition_id} wildcard in the file path with the actual partition key. HIVE implements Hive-style partitioning for reads and writes. It generates files using the following format: <prefix>/<key1=val1/key2=val2...>/<snowflakeid>.<toLower(file_format)> .
{"source_file": "s3.md"}
Example of HIVE partition strategy sql INSERT INTO FUNCTION s3(s3_conn, filename='t_03363_function', format=Parquet, partition_strategy='hive') PARTITION BY (year, country) SELECT 2020 as year, 'Russia' as country, 1 as id; ```result SELECT _path, * FROM s3(s3_conn, filename='t_03363_function/**.parquet'); β”Œβ”€_path──────────────────────────────────────────────────────────────────────┬─id─┬─country─┬─year─┐ 1. β”‚ test/t_03363_function/year=2020/country=Russia/7351295896279887872.parquet β”‚ 1 β”‚ Russia β”‚ 2020 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ ``` Examples of WILDCARD partition strategy Using partition ID in a key creates separate files: sql INSERT INTO TABLE FUNCTION s3('http://bucket.amazonaws.com/my_bucket/file_{_partition_id}.csv', 'CSV', 'a String, b UInt32, c UInt32') PARTITION BY a VALUES ('x', 2, 3), ('x', 4, 5), ('y', 11, 12), ('y', 13, 14), ('z', 21, 22), ('z', 23, 24); As a result, the data is written into three files: file_x.csv , file_y.csv , and file_z.csv . Using partition ID in a bucket name creates files in different buckets: sql INSERT INTO TABLE FUNCTION s3('http://bucket.amazonaws.com/my_bucket_{_partition_id}/file.csv', 'CSV', 'a UInt32, b UInt32, c UInt32') PARTITION BY a VALUES (1, 2, 3), (1, 4, 5), (10, 11, 12), (10, 13, 14), (20, 21, 22), (20, 23, 24); As a result, the data is written into three files in different buckets: my_bucket_1/file.csv , my_bucket_10/file.csv , and my_bucket_20/file.csv . Accessing public buckets {#accessing-public-buckets} ClickHouse tries to fetch credentials from many different types of sources. Sometimes this causes problems when accessing public buckets, making the client return a 403 error code.
This issue can be avoided by using the NOSIGN keyword, which forces the client to ignore all credentials and not sign the requests. sql SELECT * FROM s3( 'https://datasets-documentation.s3.eu-west-3.amazonaws.com/aapl_stock.csv', NOSIGN, 'CSVWithNames' ) LIMIT 5; Using S3 credentials (ClickHouse Cloud) {#using-s3-credentials-clickhouse-cloud} For non-public buckets, users can pass an aws_access_key_id and aws_secret_access_key to the function. For example: sql SELECT count() FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/mta/*.tsv', '<KEY>', '<SECRET>','TSVWithNames') This is appropriate for one-off accesses or in cases where credentials can easily be rotated. However, this is not recommended as a long-term solution for repeated access or where credentials are sensitive. In this case, we recommend users rely on role-based access.
{"source_file": "s3.md"}
Role-based access for S3 in ClickHouse Cloud is documented here . Once configured, a roleARN can be passed to the s3 function via an extra_credentials parameter. For example: sql SELECT count() FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/mta/*.tsv','CSVWithNames',extra_credentials(role_arn = 'arn:aws:iam::111111111111:role/ClickHouseAccessRole-001')) Further examples can be found here Working with archives {#working-with-archives} Suppose that we have several archive files with the following URIs on S3: 'https://s3-us-west-1.amazonaws.com/umbrella-static/top-1m-2018-01-10.csv.zip' 'https://s3-us-west-1.amazonaws.com/umbrella-static/top-1m-2018-01-11.csv.zip' 'https://s3-us-west-1.amazonaws.com/umbrella-static/top-1m-2018-01-12.csv.zip' Extracting data from these archives is possible using ::. Globs can be used both in the URL part and in the part after :: (responsible for the name of a file inside the archive). sql SELECT * FROM s3( 'https://s3-us-west-1.amazonaws.com/umbrella-static/top-1m-2018-01-1{0..2}.csv.zip :: *.csv' ); :::note ClickHouse supports three archive formats: ZIP TAR 7Z While ZIP and TAR archives can be accessed from any supported storage location, 7Z archives can only be read from the local filesystem where ClickHouse is installed. ::: Inserting Data {#inserting-data} Note that rows can only be inserted into new files. There are no merge cycles or file split operations. Once a file is written, subsequent inserts will fail. See more details here . Virtual Columns {#virtual-columns} _path β€” Path to the file. Type: LowCardinality(String) . For archives, shows the path in the format: "{path_to_archive}::{path_to_file_inside_archive}" _file β€” Name of the file. Type: LowCardinality(String) . For archives, shows the name of the file inside the archive. _size β€” Size of the file in bytes. Type: Nullable(UInt64) . If the file size is unknown, the value is NULL .
For archives, shows the uncompressed size of the file inside the archive. _time β€” Last modified time of the file. Type: Nullable(DateTime) . If the time is unknown, the value is NULL . use_hive_partitioning setting {#hive-style-partitioning} This is a hint for ClickHouse to parse Hive-style partitioned files at read time. It has no effect on writing. For symmetrical reads and writes, use the partition_strategy argument. When the setting use_hive_partitioning is set to 1, ClickHouse will detect Hive-style partitioning in the path ( /name=value/ ) and will allow using partition columns as virtual columns in the query. These virtual columns will have the same names as in the partitioned path, but starting with _ . Example sql SELECT * FROM s3('s3://data/path/date=*/country=*/code=*/*.parquet') WHERE _date > '2020-01-01' AND _country = 'Netherlands' AND _code = 42;
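The virtual columns described above can be selected like ordinary columns. A minimal sketch against the public file from the NOSIGN example earlier (actual values depend on the bucket contents):

```sql
-- _path, _file, and _size come from the storage layer, not from the CSV itself
SELECT _path, _file, _size
FROM s3(
    'https://datasets-documentation.s3.eu-west-3.amazonaws.com/aapl_stock.csv',
    NOSIGN,
    'CSVWithNames'
)
LIMIT 1;
```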
{"source_file": "s3.md"}
Accessing requester-pays buckets {#accessing-requester-pays-buckets} To access a requester-pays bucket, a header x-amz-request-payer = requester must be passed in every request. This is achieved by passing the parameter headers('x-amz-request-payer' = 'requester') to the s3 function. For example: ```sql SELECT count() AS num_rows, uniqExact(_file) AS num_files FROM s3('https://coiled-datasets-rp.s3.us-east-1.amazonaws.com/1trc/measurements-100*.parquet', 'AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY', headers('x-amz-request-payer' = 'requester')) β”Œβ”€β”€β”€num_rows─┬─num_files─┐ β”‚ 1110000000 β”‚ 111 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ 1 row in set. Elapsed: 3.089 sec. Processed 1.09 billion rows, 0.00 B (353.55 million rows/s., 0.00 B/s.) Peak memory usage: 192.27 KiB. ``` Storage Settings {#storage-settings} s3_truncate_on_insert - allows truncating the file before inserting into it. Disabled by default. s3_create_new_file_on_insert - allows creating a new file on each insert if the format has a suffix. Disabled by default. s3_skip_empty_files - allows skipping empty files while reading. Enabled by default. Nested Avro Schemas {#nested-avro-schemas} When reading Avro files that contain nested records which diverge across files (for example, some files have an extra field inside a nested object), ClickHouse may return an error such as: The number of leaves in record doesn't match the number of elements in tuple... This happens because ClickHouse expects all nested record structures to match the same schema. To handle this scenario, you can: Use schema_inference_mode='union' to merge different nested record schemas, or Manually align your nested structures and enable use_structure_from_insertion_table_in_table_functions=1 . :::note[Performance note] schema_inference_mode='union' may take longer on very large S3 datasets because it must scan each file to infer the schema.
::: Example ```sql INSERT INTO data_stage SELECT id, data FROM s3('https://bucket-name/*.avro', 'Avro') SETTINGS schema_inference_mode='union'; Related {#related} S3 engine Integrating S3 with ClickHouse
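The storage settings from the Storage Settings section above can also be applied per query. A hedged sketch; the bucket URL and schema here are placeholders, not a real endpoint:

```sql
-- overwrite the target object on each insert instead of failing on an existing file
INSERT INTO FUNCTION s3('https://my-bucket.s3.amazonaws.com/out/data.csv', 'CSV', 'name String, value UInt32')
SETTINGS s3_truncate_on_insert = 1
VALUES ('test-data', 1);
```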
{"source_file": "s3.md"}
description: 'Displays the dictionary data as a ClickHouse table. Works the same way as the Dictionary engine.' sidebar_label: 'dictionary' sidebar_position: 47 slug: /sql-reference/table-functions/dictionary title: 'dictionary' doc_type: 'reference' dictionary Table Function Displays the dictionary data as a ClickHouse table. Works the same way as Dictionary engine. Syntax {#syntax} sql dictionary('dict') Arguments {#arguments} dict β€” A dictionary name. String . Returned value {#returned_value} A ClickHouse table. Examples {#examples} Input table dictionary_source_table : text β”Œβ”€id─┬─value─┐ β”‚ 0 β”‚ 0 β”‚ β”‚ 1 β”‚ 1 β”‚ β””β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ Create a dictionary: sql CREATE DICTIONARY new_dictionary(id UInt64, value UInt64 DEFAULT 0) PRIMARY KEY id SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'dictionary_source_table')) LAYOUT(DIRECT()); Query: sql SELECT * FROM dictionary('new_dictionary'); Result: text β”Œβ”€id─┬─value─┐ β”‚ 0 β”‚ 0 β”‚ β”‚ 1 β”‚ 1 β”‚ β””β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ Related {#related} Dictionary engine
{"source_file": "dictionary.md"}
description: 'Creates a table from files in HDFS. This table function is similar to the url and file table functions.' sidebar_label: 'hdfs' sidebar_position: 80 slug: /sql-reference/table-functions/hdfs title: 'hdfs' doc_type: 'reference' import ExperimentalBadge from '@theme/badges/ExperimentalBadge'; import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge'; hdfs Table Function Creates a table from files in HDFS. This table function is similar to the url and file table functions. Syntax {#syntax} sql hdfs(URI, format, structure) Arguments {#arguments} | Argument | Description | |-----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | URI | The relative URI to the file in HDFS. The path supports the following globs in readonly mode: * , ? , {abc,def} and {N..M} where N , M β€” numbers, 'abc', 'def' β€” strings. | | format | The format of the file. | | structure | Structure of the table. Format 'column1_name column1_type, column2_name column2_type, ...' . | Returned value {#returned_value} A table with the specified structure for reading or writing data in the specified file. Example Table from hdfs://hdfs1:9000/test and selection of the first two rows from it: sql SELECT * FROM hdfs('hdfs://hdfs1:9000/test', 'TSV', 'column1 UInt32, column2 UInt32, column3 UInt32') LIMIT 2 text β”Œβ”€column1─┬─column2─┬─column3─┐ β”‚ 1 β”‚ 2 β”‚ 3 β”‚ β”‚ 3 β”‚ 2 β”‚ 1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Globs in path {#globs_in_path} Paths may use globbing. Files must match the whole path pattern, not only the suffix or prefix. * β€” Represents arbitrarily many characters except / but including the empty string. ** β€” Represents all files inside a folder recursively. ? β€” Represents an arbitrary single character.
{some_string,another_string,yet_another_one} β€” Substitutes any of strings 'some_string', 'another_string', 'yet_another_one' . The strings can contain the / symbol. {N..M} β€” Represents any number >= N and <= M . Constructions with {} are similar to the remote and file table functions. Example Suppose that we have several files with the following URIs on HDFS: 'hdfs://hdfs1:9000/some_dir/some_file_1' 'hdfs://hdfs1:9000/some_dir/some_file_2' 'hdfs://hdfs1:9000/some_dir/some_file_3' 'hdfs://hdfs1:9000/another_dir/some_file_1' 'hdfs://hdfs1:9000/another_dir/some_file_2' 'hdfs://hdfs1:9000/another_dir/some_file_3'
{"source_file": "hdfs.md"}
Query the number of rows in these files: sql SELECT count(*) FROM hdfs('hdfs://hdfs1:9000/{some,another}_dir/some_file_{1..3}', 'TSV', 'name String, value UInt32') Query the number of rows in all files of these two directories: sql SELECT count(*) FROM hdfs('hdfs://hdfs1:9000/{some,another}_dir/*', 'TSV', 'name String, value UInt32') :::note If your listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use ? . ::: Example Query the data from files named file000 , file001 , ... , file999 : sql SELECT count(*) FROM hdfs('hdfs://hdfs1:9000/big_dir/file{0..9}{0..9}{0..9}', 'CSV', 'name String, value UInt32') Virtual Columns {#virtual-columns} _path β€” Path to the file. Type: LowCardinality(String) . _file β€” Name of the file. Type: LowCardinality(String) . _size β€” Size of the file in bytes. Type: Nullable(UInt64) . If the size is unknown, the value is NULL . _time β€” Last modified time of the file. Type: Nullable(DateTime) . If the time is unknown, the value is NULL . use_hive_partitioning setting {#hive-style-partitioning} When the setting use_hive_partitioning is set to 1, ClickHouse will detect Hive-style partitioning in the path ( /name=value/ ) and will allow using partition columns as virtual columns in the query. These virtual columns will have the same names as in the partitioned path, but starting with _ . Example Use virtual column, created with Hive-style partitioning sql SELECT * FROM HDFS('hdfs://hdfs1:9000/data/path/date=*/country=*/code=*/*.parquet') WHERE _date > '2020-01-01' AND _country = 'Netherlands' AND _code = 42; Storage Settings {#storage-settings} hdfs_truncate_on_insert - allows truncating the file before inserting into it. Disabled by default.
hdfs_create_new_file_on_insert - allows creating a new file on each insert if the format has a suffix. Disabled by default. hdfs_skip_empty_files - allows skipping empty files while reading. Disabled by default. Related {#related} Virtual columns
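As a sketch of the hdfs storage settings above, settings can be applied per query; the URI follows the examples on this page and requires a reachable HDFS namenode:

```sql
-- truncate the target file before writing instead of failing on an existing file
INSERT INTO FUNCTION hdfs('hdfs://hdfs1:9000/some_dir/out.tsv', 'TSV', 'name String, value UInt32')
SETTINGS hdfs_truncate_on_insert = 1
VALUES ('test', 1);
```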
{"source_file": "hdfs.md"}
description: 'Perturbs a JSON string with random variations.' sidebar_label: 'fuzzJSON' sidebar_position: 75 slug: /sql-reference/table-functions/fuzzJSON title: 'fuzzJSON' doc_type: 'reference' fuzzJSON Table Function Perturbs a JSON string with random variations. Syntax {#syntax} sql fuzzJSON({ named_collection [, option=value [,..]] | json_str[, random_seed] }) Arguments {#arguments} | Argument | Description | |------------------------------------|---------------------------------------------------------------------------------------------| | named_collection | A NAMED COLLECTION . | | option=value | Named collection optional parameters and their values. | | json_str (String) | The source string representing structured data in JSON format. | | random_seed (UInt64) | Manual random seed for producing stable results. | | reuse_output (boolean) | Reuse the output from a fuzzing process as input for the next fuzzer. | | malform_output (boolean) | Generate a string that cannot be parsed as a JSON object. | | max_output_length (UInt64) | Maximum allowable length of the generated or perturbed JSON string. | | probability (Float64) | The probability to fuzz a JSON field (a key-value pair). Must be within [0, 1] range. | | max_nesting_level (UInt64) | The maximum allowed depth of nested structures within the JSON data. | | max_array_size (UInt64) | The maximum allowed size of a JSON array. | | max_object_size (UInt64) | The maximum allowed number of fields on a single level of a JSON object. | | max_string_value_length (UInt64) | The maximum length of a String value. | | min_key_length (UInt64) | The minimum key length. Should be at least 1. | | max_key_length (UInt64) | The maximum key length. Should be greater than or equal to min_key_length , if specified. | Returned value {#returned_value} A table object with a single column containing perturbed JSON strings.
Usage Example {#usage-example} sql CREATE NAMED COLLECTION json_fuzzer AS json_str='{}'; SELECT * FROM fuzzJSON(json_fuzzer) LIMIT 3;
{"source_file": "fuzzJSON.md"}
text {"52Xz2Zd4vKNcuP2":true} {"UPbOhOQAdPKIg91":3405264103600403024} {"X0QUWu8yT":[]} sql SELECT * FROM fuzzJSON(json_fuzzer, json_str='{"name" : "value"}', random_seed=1234) LIMIT 3; text {"key":"value", "mxPG0h1R5":"L-YQLv@9hcZbOIGrAn10%GA"} {"BRE3":true} {"key":"value", "SWzJdEJZ04nrpSfy":[{"3Q23y":[]}]} sql SELECT * FROM fuzzJSON(json_fuzzer, json_str='{"students" : ["Alice", "Bob"]}', reuse_output=true) LIMIT 3; text {"students":["Alice", "Bob"], "nwALnRMc4pyKD9Krv":[]} {"students":["1rNY5ZNs0wU&82t_P", "Bob"], "wLNRGzwDiMKdw":[{}]} {"xeEk":["1rNY5ZNs0wU&82t_P", "Bob"], "wLNRGzwDiMKdw":[{}, {}]} sql SELECT * FROM fuzzJSON(json_fuzzer, json_str='{"students" : ["Alice", "Bob"]}', max_output_length=512) LIMIT 3; text {"students":["Alice", "Bob"], "BREhhXj5":true} {"NyEsSWzJdeJZ04s":["Alice", 5737924650575683711, 5346334167565345826], "BjVO2X9L":true} {"NyEsSWzJdeJZ04s":["Alice", 5737924650575683711, 5346334167565345826], "BjVO2X9L":true, "k1SXzbSIz":[{}]} sql SELECT * FROM fuzzJSON('{"id":1}', 1234) LIMIT 3; text {"id":1, "mxPG0h1R5":"L-YQLv@9hcZbOIGrAn10%GA"} {"BRjE":16137826149911306846} {"XjKE":15076727133550123563} sql SELECT * FROM fuzzJSON(json_nc, json_str='{"name" : "FuzzJSON"}', random_seed=1337, malform_output=true) LIMIT 3; text U"name":"FuzzJSON*"SpByjZKtr2VAyHCO"falseh {"name"keFuzzJSON, "g6vVO7TCIk":jTt^ {"DBhz":YFuzzJSON5}
{"source_file": "fuzzJSON.md"}
description: 'Allows processing files from HDFS in parallel from many nodes in a specified cluster.' sidebar_label: 'hdfsCluster' sidebar_position: 81 slug: /sql-reference/table-functions/hdfsCluster title: 'hdfsCluster' doc_type: 'reference' hdfsCluster Table Function Allows processing files from HDFS in parallel from many nodes in a specified cluster. On the initiator it creates a connection to all nodes in the cluster, expands asterisks in the HDFS file path, and dispatches each file dynamically. On the worker node it asks the initiator about the next task to process and processes it. This is repeated until all tasks are finished. Syntax {#syntax} sql hdfsCluster(cluster_name, URI, format, structure) Arguments {#arguments} | Argument | Description | |----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | cluster_name | Name of a cluster that is used to build a set of addresses and connection parameters to remote and local servers. | | URI | URI to a file or a bunch of files. Supports the following wildcards in readonly mode: * , ** , ? , {'abc','def'} and {N..M} where N , M β€” numbers, abc , def β€” strings. For more information see Wildcards In Path . | | format | The format of the file. | | structure | Structure of the table. Format 'column1_name column1_type, column2_name column2_type, ...' . | Returned value {#returned_value} A table with the specified structure for reading data in the specified file.
Examples {#examples} Suppose that we have a ClickHouse cluster named cluster_simple , and several files with the following URIs on HDFS: 'hdfs://hdfs1:9000/some_dir/some_file_1' 'hdfs://hdfs1:9000/some_dir/some_file_2' 'hdfs://hdfs1:9000/some_dir/some_file_3' 'hdfs://hdfs1:9000/another_dir/some_file_1' 'hdfs://hdfs1:9000/another_dir/some_file_2'
{"source_file": "hdfsCluster.md"}
'hdfs://hdfs1:9000/another_dir/some_file_3' Query the number of rows in these files: sql SELECT count(*) FROM hdfsCluster('cluster_simple', 'hdfs://hdfs1:9000/{some,another}_dir/some_file_{1..3}', 'TSV', 'name String, value UInt32') Query the number of rows in all files of these two directories: sql SELECT count(*) FROM hdfsCluster('cluster_simple', 'hdfs://hdfs1:9000/{some,another}_dir/*', 'TSV', 'name String, value UInt32') :::note If your listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use ? . ::: Related {#related} HDFS engine HDFS table function
{"source_file": "hdfsCluster.md"}
description: 'Used for test purposes as the fastest method to generate many rows. Similar to the system.zeros and system.zeros_mt system tables.' sidebar_label: 'zeros' sidebar_position: 145 slug: /sql-reference/table-functions/zeros title: 'zeros' doc_type: 'reference' zeros Table Function zeros(N) – Returns a table with the single 'zero' column (UInt8) that contains the integer 0 N times zeros_mt(N) – The same as zeros , but uses multiple threads. This function is used for test purposes as the fastest method to generate many rows. Similar to the system.zeros and system.zeros_mt system tables. The following queries are equivalent: sql SELECT * FROM zeros(10); SELECT * FROM system.zeros LIMIT 10; SELECT * FROM zeros_mt(10); SELECT * FROM system.zeros_mt LIMIT 10; response β”Œβ”€zero─┐ β”‚ 0 β”‚ β”‚ 0 β”‚ β”‚ 0 β”‚ β”‚ 0 β”‚ β”‚ 0 β”‚ β”‚ 0 β”‚ β”‚ 0 β”‚ β”‚ 0 β”‚ β”‚ 0 β”‚ β”‚ 0 β”‚ β””β”€β”€β”€β”€β”€β”€β”˜
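Because row generation is nearly free, zeros_mt is convenient for micro-benchmarking a scalar function; a hedged sketch (ignore discards its argument and returns 0, so mostly the function's own cost is measured):

```sql
-- evaluate rand() one hundred million times; NOT ignore(...) is always 1, so every row is counted
SELECT count() FROM zeros_mt(100000000) WHERE NOT ignore(rand());
```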
{"source_file": "zeros.md"}
description: 'creates a temporary storage which fills columns with values.' keywords: ['values', 'table function'] sidebar_label: 'values' sidebar_position: 210 slug: /sql-reference/table-functions/values title: 'values' doc_type: 'reference' Values Table Function {#values-table-function} The Values table function allows you to create temporary storage which fills columns with values. It is useful for quick testing or generating sample data. :::note Values is a case-insensitive function. I.e. VALUES or values are both valid. ::: Syntax {#syntax} The basic syntax of the VALUES table function is: sql VALUES([structure,] values...) It is commonly used as: sql VALUES( ['column1_name Type1, column2_name Type2, ...'], (value1_row1, value2_row1, ...), (value1_row2, value2_row2, ...), ... ) Arguments {#arguments} column1_name Type1, ... (optional). String specifying the column names and types. If this argument is omitted columns will be named as c1 , c2 , etc. (value1_row1, value2_row1) . Tuples containing values of any type. :::note Comma separated tuples can be replaced by single values as well. In this case each value is taken to be a new row. See the examples section for details. ::: Returned value {#returned-value} Returns a temporary table containing the provided values. Examples {#examples} sql title="Query" SELECT * FROM VALUES( 'person String, place String', ('Noah', 'Paris'), ('Emma', 'Tokyo'), ('Liam', 'Sydney'), ('Olivia', 'Berlin'), ('Ilya', 'London'), ('Sophia', 'London'), ('Jackson', 'Madrid'), ('Alexey', 'Amsterdam'), ('Mason', 'Venice'), ('Isabella', 'Prague') ) response title="Response" β”Œβ”€person───┬─place─────┐ 1. β”‚ Noah β”‚ Paris β”‚ 2. β”‚ Emma β”‚ Tokyo β”‚ 3. β”‚ Liam β”‚ Sydney β”‚ 4. β”‚ Olivia β”‚ Berlin β”‚ 5. β”‚ Ilya β”‚ London β”‚ 6. β”‚ Sophia β”‚ London β”‚ 7. β”‚ Jackson β”‚ Madrid β”‚ 8. β”‚ Alexey β”‚ Amsterdam β”‚ 9. β”‚ Mason β”‚ Venice β”‚ 10. 
β”‚ Isabella β”‚ Prague β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ VALUES can also be used with single values rather than tuples. For example: sql title="Query" SELECT * FROM VALUES( 'person String', 'Noah', 'Emma', 'Liam', 'Olivia', 'Ilya', 'Sophia', 'Jackson', 'Alexey', 'Mason', 'Isabella' ) response title="Response" β”Œβ”€person───┐ 1. β”‚ Noah β”‚ 2. β”‚ Emma β”‚ 3. β”‚ Liam β”‚ 4. β”‚ Olivia β”‚ 5. β”‚ Ilya β”‚ 6. β”‚ Sophia β”‚ 7. β”‚ Jackson β”‚ 8. β”‚ Alexey β”‚ 9. β”‚ Mason β”‚ 10. β”‚ Isabella β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
{"source_file": "values.md"}
Or without providing a row specification ( 'column1_name Type1, column2_name Type2, ...' in the syntax ), in which case the columns are automatically named. For example: sql title="Query" -- tuples as values SELECT * FROM VALUES( ('Noah', 'Paris'), ('Emma', 'Tokyo'), ('Liam', 'Sydney'), ('Olivia', 'Berlin'), ('Ilya', 'London'), ('Sophia', 'London'), ('Jackson', 'Madrid'), ('Alexey', 'Amsterdam'), ('Mason', 'Venice'), ('Isabella', 'Prague') ) response title="Response" β”Œβ”€c1───────┬─c2────────┐ 1. β”‚ Noah β”‚ Paris β”‚ 2. β”‚ Emma β”‚ Tokyo β”‚ 3. β”‚ Liam β”‚ Sydney β”‚ 4. β”‚ Olivia β”‚ Berlin β”‚ 5. β”‚ Ilya β”‚ London β”‚ 6. β”‚ Sophia β”‚ London β”‚ 7. β”‚ Jackson β”‚ Madrid β”‚ 8. β”‚ Alexey β”‚ Amsterdam β”‚ 9. β”‚ Mason β”‚ Venice β”‚ 10. β”‚ Isabella β”‚ Prague β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ sql -- single values SELECT * FROM VALUES( 'Noah', 'Emma', 'Liam', 'Olivia', 'Ilya', 'Sophia', 'Jackson', 'Alexey', 'Mason', 'Isabella' ) response title="Response" β”Œβ”€c1───────┐ 1. β”‚ Noah β”‚ 2. β”‚ Emma β”‚ 3. β”‚ Liam β”‚ 4. β”‚ Olivia β”‚ 5. β”‚ Ilya β”‚ 6. β”‚ Sophia β”‚ 7. β”‚ Jackson β”‚ 8. β”‚ Alexey β”‚ 9. β”‚ Mason β”‚ 10. β”‚ Isabella β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ See also {#see-also} Values format
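Beyond plain SELECTs, the Values function fits anywhere a table expression is expected, for example to seed a table quickly; a sketch assuming a hypothetical table cities with matching columns:

```sql
-- `cities` is a hypothetical target table with columns (person String, place String);
-- the structure string keeps column names and types aligned with the target
INSERT INTO cities
SELECT * FROM VALUES('person String, place String', ('Noah', 'Paris'), ('Emma', 'Tokyo'));
```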
{"source_file": "values.md"}
description: 'Generates random data with a given schema. Allows populating test tables with that data. Not all types are supported.' sidebar_label: 'generateRandom' sidebar_position: 75 slug: /sql-reference/table-functions/generate title: 'generateRandom' doc_type: 'reference' generateRandom Table Function Generates random data with a given schema. Allows populating test tables with that data. Not all types are supported. Syntax {#syntax} sql generateRandom(['name TypeName[, name TypeName]...', [, 'random_seed'[, 'max_string_length'[, 'max_array_length']]]]) Arguments {#arguments} | Argument | Description | |---------------------|-------------------------------------------------------------------------------------------------| | name | Name of corresponding column. | | TypeName | Type of corresponding column. | | random_seed | Specify random seed manually to produce stable results. If NULL β€” seed is randomly generated. | | max_string_length | Maximum string length for all generated strings. Defaults to 10 . | | max_array_length | Maximum elements for all generated arrays or maps. Defaults to 10 . | Returned value {#returned_value} A table object with requested schema. 
Usage Example {#usage-example} sql SELECT * FROM generateRandom('a Array(Int8), d Decimal32(4), c Tuple(DateTime64(3), UUID)', 1, 10, 2) LIMIT 3; text β”Œβ”€a────────┬────────────d─┬─c──────────────────────────────────────────────────────────────────┐ β”‚ [77] β”‚ -124167.6723 β”‚ ('2061-04-17 21:59:44.573','3f72f405-ec3e-13c8-44ca-66ef335f7835') β”‚ β”‚ [32,110] β”‚ -141397.7312 β”‚ ('1979-02-09 03:43:48.526','982486d1-5a5d-a308-e525-7bd8b80ffa73') β”‚ β”‚ [68] β”‚ -67417.0770 β”‚ ('2080-03-12 14:17:31.269','110425e5-413f-10a6-05ba-fa6b3e929f15') β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ sql CREATE TABLE random (a Array(Int8), d Decimal32(4), c Tuple(DateTime64(3), UUID)) ENGINE=Memory; INSERT INTO random SELECT * FROM generateRandom() LIMIT 2; SELECT * FROM random; text β”Œβ”€a────────────────────────────┬────────────d─┬─c──────────────────────────────────────────────────────────────────┐ β”‚ [] β”‚ 68091.8197 β”‚ ('2037-10-02 12:44:23.368','039ecab7-81c2-45ee-208c-844e5c6c5652') β”‚ β”‚ [8,-83,0,-22,65,9,-30,28,64] β”‚ -186233.4909 β”‚ ('2062-01-11 00:06:04.124','69563ea1-5ad1-f870-16d8-67061da0df25') β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ In combination with generateRandomStructure : sql SELECT * FROM generateRandom(generateRandomStructure(4, 101), 101) LIMIT 3;
{"source_file": "generate.md"}
[ 0.01796828769147396, 0.02295740135014057, -0.06757520139217377, 0.040398091077804565, -0.050310105085372925, -0.03172079473733902, 0.07002638280391693, 0.012620791792869568, -0.07464119791984558, 0.023295700550079346, 0.03277202695608139, -0.07393835484981537, 0.08299539983272552, -0.08074...
e75f071b-a00a-46f7-a9ea-ad8b8952c180
In combination with generateRandomStructure : sql SELECT * FROM generateRandom(generateRandomStructure(4, 101), 101) LIMIT 3; text β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€c1─┬──────────────────c2─┬─c3─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─c4──────────────────────────────────────┐ β”‚ 1996-04-15 06:40:05 β”‚ 33954608387.2844801 β”‚ ['232.78.216.176','9.244.59.211','211.21.80.152','44.49.94.109','165.77.195.182','68.167.134.239','212.13.24.185','1.197.255.35','192.55.131.232'] β”‚ 45d9:2b52:ab6:1c59:185b:515:c5b6:b781 β”‚ β”‚ 2063-01-13 01:22:27 β”‚ 36155064970.9514454 β”‚ ['176.140.188.101'] β”‚ c65a:2626:41df:8dee:ec99:f68d:c6dd:6b30 β”‚ β”‚ 2090-02-28 14:50:56 β”‚ 3864327452.3901373 β”‚ ['155.114.30.32'] β”‚ 57e9:5229:93ab:fbf3:aae7:e0e4:d1eb:86b β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ With missing structure argument (in this case the structure is random): sql SELECT * FROM generateRandom() LIMIT 3; text β”Œβ”€β”€β”€c1─┬─────────c2─┬─────────────────────c3─┬──────────────────────c4─┬─c5───────┐ β”‚ -128 β”‚ 317300854 β”‚ 2030-08-16 08:22:20.65 β”‚ 1994-08-16 12:08:56.745 β”‚ R0qgiC46 β”‚ β”‚ 40 β”‚ -744906827 β”‚ 2059-04-16 06:31:36.98 β”‚ 1975-07-16 16:28:43.893 β”‚ PuH4M*MZ β”‚ 
β”‚ -55 β”‚ 698652232 β”‚ 2052-08-04 20:13:39.68 β”‚ 1998-09-20 03:48:29.279 β”‚ β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ With random seed both for random structure and random data: sql SELECT * FROM generateRandom(11) LIMIT 3;
{"source_file": "generate.md"}
[ 0.03560108691453934, 0.009462954476475716, -0.058482248336076736, -0.0056351725943386555, -0.020938098430633545, -0.06717800348997116, 0.07135052978992462, -0.05646427348256111, -0.007894164882600307, 0.06808340549468994, -0.0022043937351554632, -0.09375287592411041, 0.04832286387681961, -...
5ba16d6e-d347-40eb-b139-24c0cc54ec2c
text β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€c1─┬─────────────────────────────────────────────────────────────────────────────c2─┬─────────────────────────────────────────────────────────────────────────────c3─┬─────────c4─┬─────────────────────────────────────────────────────────────────────────────c5─┬──────────────────────c6─┬─c7──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─c8──────────────────────────────────────┬─────────c9─┐ β”‚ -77422512305044606600216318673365695785 β”‚ 636812099959807642229.503817849012019401335326013846687285151335352272727523 β”‚ -34944452809785978175157829109276115789694605299387223845886143311647505037529 β”‚ 544473976 β”‚ 111220388331710079615337037674887514156741572807049614590010583571763691328563 β”‚ 22016.22623506465 β”‚ {'2052-01-31 20:25:33':4306400876908509081044405485378623663,'1993-04-16 15:58:49':164367354809499452887861212674772770279,'2101-08-19 03:07:18':-60676948945963385477105077735447194811,'2039-12-22 22:31:39':-59227773536703059515222628111999932330} β”‚ a7b2:8f58:4d07:6707:4189:80cf:92f5:902d β”‚ 1950-07-14 β”‚ β”‚ -159940486888657488786004075627859832441 β”‚ 629206527868163085099.8195700356331771569105231840157308480121506729741348442 β”‚ -53203761250367440823323469081755775164053964440214841464405368882783634063735 β”‚ 2187136525 β”‚ 94881662451116595672491944222189810087991610568040618106057495823910493624275 β”‚ 1.3095786748458954e-104 β”‚ {} β”‚ a051:e3da:2e0a:c69:7835:aed6:e8b:3817 β”‚ 1943-03-25 β”‚ β”‚ -5239084224358020595591895205940528518 β”‚ -529937657954363597180.1709207212648004850138812370209091520162977548101577846 β”‚ 47490343304582536176125359129223180987770215457970451211489086575421345731671 β”‚ 1637451978 β”‚ 
101899445785010192893461828129714741298630410942962837910400961787305271699002 β”‚ 2.4344456058391296e223 β”‚ {'2013-12-22 17:42:43':80271108282641375975566414544777036006,'2041-03-08 10:28:17':169706054082247533128707458270535852845,'1986-08-31 23:07:38':-54371542820364299444195390357730624136,'2094-04-23 21:26:50':7944954483303909347454597499139023465} β”‚ 1293:a726:e899:9bfc:8c6f:2aa1:22c9:b635 β”‚ 1924-11-20 β”‚
{"source_file": "generate.md"}
[ 0.012594763189554214, -0.005912866909056902, -0.01981927827000618, -0.019227182492613792, -0.0033692317083477974, -0.04588698968291283, -0.017997730523347855, 0.012795434333384037, 0.049605146050453186, 0.022431261837482452, 0.058570411056280136, -0.01068704854696989, 0.018988430500030518, ...
c579a745-aec7-4502-92f5-cfb64860cf5a
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
{"source_file": "generate.md"}
[ -0.04159650206565857, -0.024681830778717995, 0.07789310067892075, -0.006149471737444401, 0.017001427710056305, 0.0012621216010302305, 0.12373971194028854, -0.001971512334421277, 0.05898108333349228, -0.04216562211513519, 0.059197306632995605, -0.07129892706871033, 0.12670376896858215, 0.02...
8dd5e4f8-d351-47d5-9f74-105f7b0f2430
:::note generateRandom(generateRandomStructure(), [random seed], max_string_length, max_array_length) with a large enough max_array_length can generate very large output, because complex types ( Array , Tuple , Map , Nested ) may be nested up to 16 levels deep. ::: Related content {#related-content} Blog: Generating random data in ClickHouse
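Given that, the output size can be bounded by passing explicit max_string_length and max_array_length caps. A minimal sketch (the schema is illustrative):

```sql
-- cap strings at 5 characters and arrays/maps at 2 elements per nesting level
SELECT *
FROM generateRandom('s String, a Array(Array(Int8))', NULL, 5, 2)
LIMIT 3;
```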
{"source_file": "generate.md"}
[ -0.04397840425372124, 0.003130580997094512, -0.0473274402320385, 0.020517561584711075, 0.022196251899003983, -0.037791069597005844, -0.017316168174147606, -0.09395114332437515, -0.0317719466984272, -0.0014284767676144838, 0.03865645453333855, -0.001102900831028819, 0.0921550989151001, -0.0...
7ee8a6d2-ffb7-403a-938a-1b3ec9b72478
description: 'Provides a read-only table-like interface to the Delta Lake tables in Amazon S3.' sidebar_label: 'deltaLake' sidebar_position: 45 slug: /sql-reference/table-functions/deltalake title: 'deltaLake' doc_type: 'reference' deltaLake Table Function Provides a read-only table-like interface to Delta Lake tables in Amazon S3, Azure Blob Storage, or a locally mounted file system. Syntax {#syntax} deltaLake is an alias of deltaLakeS3 ; it is kept for compatibility. ```sql deltaLake(url [,aws_access_key_id, aws_secret_access_key] [,format] [,structure] [,compression]) deltaLakeS3(url [,aws_access_key_id, aws_secret_access_key] [,format] [,structure] [,compression]) deltaLakeAzure(connection_string|storage_account_url, container_name, blobpath, [,account_name], [,account_key] [,format] [,compression_method]) deltaLakeLocal(path, [,format]) ``` Arguments {#arguments} The arguments are described in the s3 , azureBlobStorage , HDFS and file table functions respectively. format stands for the format of data files in the Delta Lake table. Returned value {#returned_value} A table with the specified structure for reading data in the specified Delta Lake table. 
Examples {#examples} Selecting rows from the table in S3 https://clickhouse-public-datasets.s3.amazonaws.com/delta_lake/hits/ : sql SELECT URL, UserAgent FROM deltaLake('https://clickhouse-public-datasets.s3.amazonaws.com/delta_lake/hits/') WHERE URL IS NOT NULL LIMIT 2 response β”Œβ”€URL───────────────────────────────────────────────────────────────────┬─UserAgent─┐ β”‚ http://auto.ria.ua/search/index.kz/jobinmoscow/detail/55089/hasimages β”‚ 1 β”‚ β”‚ http://auto.ria.ua/search/index.kz/jobinmoscow.ru/gosushi β”‚ 1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Virtual Columns {#virtual-columns} _path β€” Path to the file. Type: LowCardinality(String) . _file β€” Name of the file. Type: LowCardinality(String) . _size β€” Size of the file in bytes. Type: Nullable(UInt64) . If the file size is unknown, the value is NULL . _time β€” Last modified time of the file. Type: Nullable(DateTime) . If the time is unknown, the value is NULL . _etag β€” The etag of the file. Type: LowCardinality(String) . If the etag is unknown, the value is NULL . Related {#related} DeltaLake engine DeltaLake cluster table function
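The virtual columns above can be selected like ordinary columns, which helps to see which underlying data files back a query. A hedged sketch against the same public dataset:

```sql
-- inspect the data files behind the Delta Lake table
SELECT _path, _file, _size
FROM deltaLake('https://clickhouse-public-datasets.s3.amazonaws.com/delta_lake/hits/')
LIMIT 5;
```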
{"source_file": "deltalake.md"}
[ -0.03862440958619118, -0.016309883445501328, -0.08325008302927017, -0.02785583958029747, -0.020081447437405586, -0.017748326063156128, 0.01884043589234352, -0.00782245583832264, -0.01004562247544527, 0.04495839774608612, 0.01642436534166336, -0.03115645982325077, 0.10823221504688263, -0.08...
7a749869-500d-4a57-9fbf-afb332f5206a
description: 'Provides a table-like interface to SELECT and INSERT data from Google Cloud Storage. Requires the Storage Object User IAM role.' keywords: ['gcs', 'bucket'] sidebar_label: 'gcs' sidebar_position: 70 slug: /sql-reference/table-functions/gcs title: 'gcs' doc_type: 'reference' gcs Table Function Provides a table-like interface to SELECT and INSERT data from Google Cloud Storage . Requires the Storage Object User IAM role . This is an alias of the s3 table function . If you have multiple replicas in your cluster, you can use the s3Cluster function (which works with GCS) instead to parallelize inserts. Syntax {#syntax} sql gcs(url [, NOSIGN | hmac_key, hmac_secret] [,format] [,structure] [,compression_method]) gcs(named_collection[, option=value [,..]]) :::tip GCS The GCS Table Function integrates with Google Cloud Storage by using the GCS XML API and HMAC keys. See the Google interoperability docs for more details about the endpoint and HMAC. ::: Arguments {#arguments} | Argument | Description | |------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | url | Bucket path to file. Supports following wildcards in readonly mode: * , ** , ? , {abc,def} and {N..M} where N , M β€” numbers, 'abc' , 'def' β€” strings. | | NOSIGN | If this keyword is provided in place of credentials, all the requests will not be signed. | | hmac_key and hmac_secret | Keys that specify credentials to use with given endpoint. Optional. | | format | The format of the file. | | structure | Structure of the table. Format 'column1_name column1_type, column2_name column2_type, ...' . | | compression_method | Parameter is optional. Supported values: none , gzip or gz , brotli or br , xz or LZMA , zstd or zst . By default, it will autodetect compression method by file extension. 
| :::note GCS The GCS path is in this format as the endpoint for the Google XML API is different than the JSON API: text https://storage.googleapis.com/<bucket>/<folder>/<filename(s)>
{"source_file": "gcs.md"}
[ -0.07099271565675735, -0.10989756882190704, -0.05336460843682289, -0.007182334084063768, -0.010847803205251694, -0.007971464656293392, 0.008190223947167397, -0.05502517893910408, 0.010364591144025326, 0.05539701133966446, 0.04824502766132355, -0.04981907829642296, 0.11576139181852341, -0.0...
404e77b3-9c2b-418f-9452-0497108ee300
:::note GCS The GCS path is in this format as the endpoint for the Google XML API is different than the JSON API: text https://storage.googleapis.com/<bucket>/<folder>/<filename(s)> and not ~~https://storage.cloud.google.com~~. ::: Arguments can also be passed using named collections . In this case url , format , structure , compression_method work in the same way, and some extra parameters are supported: | Parameter | Description | |-------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | access_key_id | hmac_key , optional. | | secret_access_key | hmac_secret , optional. | | filename | Appended to the url if specified. | | use_environment_credentials | Enabled by default, allows passing extra parameters using environment variables AWS_CONTAINER_CREDENTIALS_RELATIVE_URI , AWS_CONTAINER_CREDENTIALS_FULL_URI , AWS_CONTAINER_AUTHORIZATION_TOKEN , AWS_EC2_METADATA_DISABLED . | | no_sign_request | Disabled by default. | | expiration_window_seconds | Default value is 120. | Returned value {#returned_value} A table with the specified structure for reading or writing data in the specified file. Examples {#examples} Selecting the first two rows from the table from GCS file https://storage.googleapis.com/my-test-bucket-768/data.csv : sql SELECT * FROM gcs('https://storage.googleapis.com/clickhouse_public_datasets/my-test-bucket-768/data.csv.gz', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32') LIMIT 2;
{"source_file": "gcs.md"}
[ -0.10815610736608505, 0.06279028952121735, -0.004833877086639404, -0.008898594416677952, -0.025645067915320396, -0.030973907560110092, -0.03808381035923958, -0.0060768136754632, 0.051533956080675125, 0.028856799006462097, 0.009289856068789959, -0.02976503036916256, 0.04619845002889633, -0....
626c3440-f3b3-422a-a8aa-68fa0b43941f
sql SELECT * FROM gcs('https://storage.googleapis.com/clickhouse_public_datasets/my-test-bucket-768/data.csv.gz', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32') LIMIT 2; text β”Œβ”€column1─┬─column2─┬─column3─┐ β”‚ 1 β”‚ 2 β”‚ 3 β”‚ β”‚ 3 β”‚ 2 β”‚ 1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ The same query, but reading from a file with the gzip compression method: sql SELECT * FROM gcs('https://storage.googleapis.com/clickhouse_public_datasets/my-test-bucket-768/data.csv.gz', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32', 'gzip') LIMIT 2; text β”Œβ”€column1─┬─column2─┬─column3─┐ β”‚ 1 β”‚ 2 β”‚ 3 β”‚ β”‚ 3 β”‚ 2 β”‚ 1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Usage {#usage} Suppose that we have several files with the following URIs on GCS: 'https://storage.googleapis.com/my-test-bucket-768/some_prefix/some_file_1.csv' 'https://storage.googleapis.com/my-test-bucket-768/some_prefix/some_file_2.csv' 'https://storage.googleapis.com/my-test-bucket-768/some_prefix/some_file_3.csv' 'https://storage.googleapis.com/my-test-bucket-768/some_prefix/some_file_4.csv' 'https://storage.googleapis.com/my-test-bucket-768/another_prefix/some_file_1.csv' 'https://storage.googleapis.com/my-test-bucket-768/another_prefix/some_file_2.csv' 'https://storage.googleapis.com/my-test-bucket-768/another_prefix/some_file_3.csv' 'https://storage.googleapis.com/my-test-bucket-768/another_prefix/some_file_4.csv' Count the number of rows in files ending with numbers from 1 to 3: sql SELECT count(*) FROM gcs('https://storage.googleapis.com/clickhouse_public_datasets/my-test-bucket-768/{some,another}_prefix/some_file_{1..3}.csv', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32') text β”Œβ”€count()─┐ β”‚ 18 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Count the total number of rows in all files in these two directories: sql SELECT count(*) FROM 
gcs('https://storage.googleapis.com/clickhouse_public_datasets/my-test-bucket-768/{some,another}_prefix/*', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32') text β”Œβ”€count()─┐ β”‚ 24 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ :::warning If your list of files contains number ranges with leading zeros, use the brace construction for each digit separately or use ? . ::: Count the total number of rows in files named file-000.csv , file-001.csv , ... , file-999.csv : sql SELECT count(*) FROM gcs('https://storage.googleapis.com/clickhouse_public_datasets/my-test-bucket-768/big_prefix/file-{000..999}.csv', 'CSV', 'name String, value UInt32'); text β”Œβ”€count()─┐ β”‚ 12 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Insert data into file test-data.csv.gz : sql INSERT INTO FUNCTION gcs('https://storage.googleapis.com/my-test-bucket-768/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip') VALUES ('test-data', 1), ('test-data-2', 2); Insert data into file test-data.csv.gz from an existing table:
{"source_file": "gcs.md"}
[ -0.09278754144906998, 0.03153903782367706, -0.0590943843126297, 0.04524048790335655, 0.0168253593146801, -0.10990726947784424, 0.0500897541642189, -0.050501398742198944, 0.014315624721348286, 0.06947798281908035, 0.010433819144964218, -0.02951432392001152, 0.06249329820275307, -0.052129477...
d71fb09e-d92c-45ad-8508-659375f9d6d0
Insert data into file test-data.csv.gz from an existing table: sql INSERT INTO FUNCTION gcs('https://storage.googleapis.com/my-test-bucket-768/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip') SELECT name, value FROM existing_table; Glob ** can be used for recursive directory traversal. Consider the example below; it fetches all files from the my-test-bucket-768 directory recursively: sql SELECT * FROM gcs('https://storage.googleapis.com/my-test-bucket-768/**', 'CSV', 'name String, value UInt32', 'gzip'); The query below gets data from all test-data.csv.gz files in any folder inside the my-test-bucket directory recursively: sql SELECT * FROM gcs('https://storage.googleapis.com/my-test-bucket-768/**/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip'); For production use cases it is recommended to use named collections . Here is an example: ```sql CREATE NAMED COLLECTION creds AS access_key_id = ' ', secret_access_key = ' '; SELECT count(*) FROM gcs(creds, url='https://s3-object-url.csv') ``` Partitioned Write {#partitioned-write} If you specify a PARTITION BY expression when inserting data into a GCS table, a separate file is created for each partition value. Splitting the data into separate files helps to improve read efficiency. Examples Using partition ID in a key creates separate files: sql INSERT INTO TABLE FUNCTION gcs('http://bucket.amazonaws.com/my_bucket/file_{_partition_id}.csv', 'CSV', 'a String, b UInt32, c UInt32') PARTITION BY a VALUES ('x', 2, 3), ('x', 4, 5), ('y', 11, 12), ('y', 13, 14), ('z', 21, 22), ('z', 23, 24); As a result, the data is written into three files: file_x.csv , file_y.csv , and file_z.csv . 
Using partition ID in a bucket name creates files in different buckets: sql INSERT INTO TABLE FUNCTION gcs('http://bucket.amazonaws.com/my_bucket_{_partition_id}/file.csv', 'CSV', 'a UInt32, b UInt32, c UInt32') PARTITION BY a VALUES (1, 2, 3), (1, 4, 5), (10, 11, 12), (10, 13, 14), (20, 21, 22), (20, 23, 24); As a result, the data is written into three files in different buckets: my_bucket_1/file.csv , my_bucket_10/file.csv , and my_bucket_20/file.csv . Related {#related} S3 table function S3 engine
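For public buckets, the NOSIGN keyword from the argument list can be passed in place of the HMAC key pair so that requests are not signed. A sketch (the bucket path is illustrative):

```sql
-- read a public bucket without credentials
SELECT count(*)
FROM gcs(
    'https://storage.googleapis.com/clickhouse_public_datasets/my-test-bucket-768/data.csv.gz',
    NOSIGN, 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32'
);
```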
{"source_file": "gcs.md"}
[ -0.03453502058982849, -0.03160443529486656, -0.06264597177505493, 0.058518342673778534, -0.039571452885866165, -0.105046346783638, 0.05215577036142349, 0.030645085498690605, -0.009225426241755486, 0.07876023650169373, 0.04701933637261391, -0.049470022320747375, 0.09269583970308304, -0.0445...
0bf8d06f-76b8-4746-8edd-5414f2f45997
description: 'Provides a read-only table-like interface to Apache Paimon tables in Amazon S3, Azure, HDFS, or stored locally.' sidebar_label: 'paimon' sidebar_position: 90 slug: /sql-reference/table-functions/paimon title: 'paimon' doc_type: 'reference' paimon Table Function {#paimon-table-function} Provides a read-only table-like interface to Apache Paimon tables in Amazon S3, Azure, HDFS, or stored locally. Syntax {#syntax} ```sql paimon(url [,access_key_id, secret_access_key] [,format] [,structure] [,compression]) paimonS3(url [,access_key_id, secret_access_key] [,format] [,structure] [,compression]) paimonAzure(connection_string|storage_account_url, container_name, blobpath, [,account_name], [,account_key] [,format] [,compression_method]) paimonHDFS(path_to_table, [,format] [,compression_method]) paimonLocal(path_to_table, [,format] [,compression_method]) ``` Arguments {#arguments} The arguments are described in the s3 , azureBlobStorage , HDFS and file table functions respectively. format stands for the format of data files in the Paimon table. Returned value {#returned-value} A table with the specified structure for reading data in the specified Paimon table. Defining a named collection {#defining-a-named-collection} Here is an example of configuring a named collection for storing the URL and credentials: xml <clickhouse> <named_collections> <paimon_conf> <url>http://test.s3.amazonaws.com/clickhouse-bucket/</url> <access_key_id>test</access_key_id> <secret_access_key>test</secret_access_key> <format>auto</format> <structure>auto</structure> </paimon_conf> </named_collections> </clickhouse> sql SELECT * FROM paimonS3(paimon_conf, filename = 'test_table') DESCRIBE paimonS3(paimon_conf, filename = 'test_table') Aliases {#aliases} Table function paimon is currently an alias of paimonS3 . Virtual Columns {#virtual-columns} _path β€” Path to the file. Type: LowCardinality(String) . _file β€” Name of the file. 
Type: LowCardinality(String) . _size β€” Size of the file in bytes. Type: Nullable(UInt64) . If the file size is unknown, the value is NULL . _time β€” Last modified time of the file. Type: Nullable(DateTime) . If the time is unknown, the value is NULL . _etag β€” The etag of the file. Type: LowCardinality(String) . If the etag is unknown, the value is NULL . Data Types supported {#data-types-supported}
{"source_file": "paimon.md"}
[ -0.016577916219830513, -0.03715655952692032, -0.09358440339565277, -0.028444653376936913, -0.05069594085216522, 0.02116895280778408, -0.007042743731290102, -0.030071338638663292, -0.029971200972795486, 0.07013168931007385, 0.057811345905065536, -0.042082302272319794, 0.05743289366364479, -...
fc81a3d6-54ee-4527-81fb-203fc0aa8b28
_etag β€” The etag of the file. Type: LowCardinality(String) . If the etag is unknown, the value is NULL . Data Types supported {#data-types-supported} | Paimon Data Type | ClickHouse Data Type | |-------|--------| |BOOLEAN |Int8 | |TINYINT |Int8 | |SMALLINT |Int16 | |INTEGER |Int32 | |BIGINT |Int64 | |FLOAT |Float32 | |DOUBLE |Float64 | |STRING,VARCHAR,BYTES,VARBINARY |String | |DATE |Date | |TIME(p),TIME |Time('UTC') | |TIMESTAMP(p) WITH LOCAL TIME ZONE |DateTime64 | |TIMESTAMP(p) |DateTime64('UTC') | |CHAR |FixedString(1) | |BINARY(n) |FixedString(n) | |DECIMAL(P,S) |Decimal(P,S) | |ARRAY |Array | |MAP |Map | Partition supported {#partition-supported} Data types supported in Paimon partition keys: * CHAR * VARCHAR * BOOLEAN * DECIMAL * TINYINT * SMALLINT * INTEGER * DATE * TIME * TIMESTAMP * TIMESTAMP WITH LOCAL TIME ZONE * BIGINT * FLOAT * DOUBLE See Also {#see-also} Paimon cluster table function
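For local testing without object storage, the paimonLocal variant from the syntax section reads a table straight from the file system. A minimal sketch (the path is illustrative):

```sql
-- read a locally stored Paimon table
SELECT * FROM paimonLocal('/var/lib/paimon/test_table') LIMIT 3;
```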
{"source_file": "paimon.md"}
[ -0.010866773314774036, 0.011979728937149048, -0.011521156877279282, -0.002692799549549818, -0.022120226174592972, -0.029227333143353462, -0.010614313185214996, 0.05301329866051674, -0.009538715705275536, -0.003666680073365569, 0.0519375205039978, -0.06244392693042755, -0.02808733470737934, ...
afd87427-8e15-435f-ae79-c5673bcb9b20
description: 'Documentation for Table Functions' sidebar_label: 'Table Functions' sidebar_position: 1 slug: /sql-reference/table-functions/ title: 'Table Functions' doc_type: 'reference' Table Functions Table functions are methods for constructing tables. Usage {#usage} Table functions can be used in the FROM clause of a SELECT query. For example, you can SELECT data from a file on your local machine using the file table function. bash echo "1, 2, 3" > example.csv text ./clickhouse client :) SELECT * FROM file('example.csv') β”Œβ”€c1─┬─c2─┬─c3─┐ β”‚ 1 β”‚ 2 β”‚ 3 β”‚ β””β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”˜ You can also use table functions for creating a temporary table that is available only in the current query. For example: sql title="Query" SELECT * FROM generateSeries(1,5); response title="Response" β”Œβ”€generate_series─┐ β”‚ 1 β”‚ β”‚ 2 β”‚ β”‚ 3 β”‚ β”‚ 4 β”‚ β”‚ 5 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ The table is deleted when the query finishes. Table functions can be used as a way to create tables, using the following syntax: sql CREATE TABLE [IF NOT EXISTS] [db.]table_name AS table_function() For example: sql title="Query" CREATE TABLE series AS generateSeries(1, 5); SELECT * FROM series; response β”Œβ”€generate_series─┐ β”‚ 1 β”‚ β”‚ 2 β”‚ β”‚ 3 β”‚ β”‚ 4 β”‚ β”‚ 5 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Finally, table functions can be used to INSERT data into a table. For example, we could write out the contents of the table we created in the previous example to a file on disk using the file table function again: sql INSERT INTO FUNCTION file('numbers.csv', 'CSV') SELECT * FROM series; bash cat numbers.csv 1 2 3 4 5 :::note You can't use table functions if the allow_ddl setting is disabled. :::
{"source_file": "index.md"}
[ -0.04354554042220116, 0.00040608076960779727, -0.08741261810064316, 0.07514346390962601, -0.03838012367486954, -0.0311025008559227, 0.011901285499334335, 0.055114906281232834, 0.0037224176339805126, 0.055694229900836945, 0.015428178012371063, -0.014096392318606377, 0.08913572877645493, -0....
96b70a4e-d5de-4181-95bc-d0d48a00979f
description: 'Represents the contents of index and marks files of MergeTree tables. It can be used for introspection.' sidebar_label: 'mergeTreeIndex' sidebar_position: 77 slug: /sql-reference/table-functions/mergeTreeIndex title: 'mergeTreeIndex' doc_type: 'reference' mergeTreeIndex Table Function Represents the contents of index and marks files of MergeTree tables. It can be used for introspection. Syntax {#syntax} sql mergeTreeIndex(database, table [, with_marks = true] [, with_minmax = true]) Arguments {#arguments} | Argument | Description | |---------------|---------------------------------------------------| | database | The database name to read index and marks from. | | table | The table name to read index and marks from. | | with_marks | Whether to include mark columns in the result. | | with_minmax | Whether to include the min-max index in the result. | Returned value {#returned_value} A table object with columns containing the values of the primary index and min-max index (if enabled) of the source table, columns with the values of marks (if enabled) for all possible files in the data parts of the source table, and virtual columns: part_name - The name of the data part. mark_number - The number of the current mark in the data part. rows_in_granule - The number of rows in the current granule. A marks column may contain a (NULL, NULL) value when the column is absent in a data part or when marks for one of its substreams are not written (e.g. in compact parts). Usage Example {#usage-example} ```sql CREATE TABLE test_table ( id UInt64, n UInt64, arr Array(UInt64) ) ENGINE = MergeTree ORDER BY id SETTINGS index_granularity = 3, min_bytes_for_wide_part = 0, min_rows_for_wide_part = 8; INSERT INTO test_table SELECT number, number, range(number % 5) FROM numbers(5); INSERT INTO test_table SELECT number, number, range(number % 5) FROM numbers(10, 10); ``` sql SELECT * FROM mergeTreeIndex(currentDatabase(), test_table, with_marks = true);
{"source_file": "mergeTreeIndex.md"}
[ 0.0638343021273613, 0.013201471418142319, -0.023099126294255257, 0.06032360717654228, -0.004196310881525278, -0.0014117737300693989, 0.06146860122680664, 0.09546219557523727, -0.06708204746246338, 0.008665204979479313, 0.02181699313223362, -0.006699533201754093, 0.055108774453401566, -0.09...
f2b41180-79da-4434-90ac-c2fa6f275d30
INSERT INTO test_table SELECT number, number, range(number % 5) FROM numbers(10, 10); ``` sql SELECT * FROM mergeTreeIndex(currentDatabase(), test_table, with_marks = true); text β”Œβ”€part_name─┬─mark_number─┬─rows_in_granule─┬─id─┬─id.mark─┬─n.mark──┬─arr.size0.mark─┬─arr.mark─┐ β”‚ all_1_1_0 β”‚ 0 β”‚ 3 β”‚ 0 β”‚ (0,0) β”‚ (42,0) β”‚ (NULL,NULL) β”‚ (84,0) β”‚ β”‚ all_1_1_0 β”‚ 1 β”‚ 2 β”‚ 3 β”‚ (133,0) β”‚ (172,0) β”‚ (NULL,NULL) β”‚ (211,0) β”‚ β”‚ all_1_1_0 β”‚ 2 β”‚ 0 β”‚ 4 β”‚ (271,0) β”‚ (271,0) β”‚ (NULL,NULL) β”‚ (271,0) β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”Œβ”€part_name─┬─mark_number─┬─rows_in_granule─┬─id─┬─id.mark─┬─n.mark─┬─arr.size0.mark─┬─arr.mark─┐ β”‚ all_2_2_0 β”‚ 0 β”‚ 3 β”‚ 10 β”‚ (0,0) β”‚ (0,0) β”‚ (0,0) β”‚ (0,0) β”‚ β”‚ all_2_2_0 β”‚ 1 β”‚ 3 β”‚ 13 β”‚ (0,24) β”‚ (0,24) β”‚ (0,24) β”‚ (0,24) β”‚ β”‚ all_2_2_0 β”‚ 2 β”‚ 3 β”‚ 16 β”‚ (0,48) β”‚ (0,48) β”‚ (0,48) β”‚ (0,80) β”‚ β”‚ all_2_2_0 β”‚ 3 β”‚ 1 β”‚ 19 β”‚ (0,72) β”‚ (0,72) β”‚ (0,72) β”‚ (0,128) β”‚ β”‚ all_2_2_0 β”‚ 4 β”‚ 0 β”‚ 19 β”‚ (0,80) β”‚ (0,80) β”‚ (0,80) β”‚ (0,160) β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ sql DESCRIBE mergeTreeIndex(currentDatabase(), test_table, with_marks = true) SETTINGS describe_compact_output = 1; text β”Œβ”€name────────────┬─type─────────────────────────────────────────────────────────────────────────────────────────────┐ β”‚ part_name β”‚ String β”‚ β”‚ mark_number β”‚ UInt64 β”‚ β”‚ rows_in_granule β”‚ UInt64 β”‚ β”‚ id β”‚ UInt64 β”‚ β”‚ 
id.mark β”‚ Tuple(offset_in_compressed_file Nullable(UInt64), offset_in_decompressed_block Nullable(UInt64)) β”‚ β”‚ n.mark β”‚ Tuple(offset_in_compressed_file Nullable(UInt64), offset_in_decompressed_block Nullable(UInt64)) β”‚ β”‚ arr.size0.mark β”‚ Tuple(offset_in_compressed_file Nullable(UInt64), offset_in_decompressed_block Nullable(UInt64)) β”‚ β”‚ arr.mark β”‚ Tuple(offset_in_compressed_file Nullable(UInt64), offset_in_decompressed_block Nullable(UInt64)) β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
{"source_file": "mergeTreeIndex.md"}
[ 0.09460369497537613, -0.037243783473968506, 0.03662462532520294, -0.021677395328879356, -0.020320508629083633, -0.012868126854300499, 0.05725186690688133, 0.00909410510212183, -0.10658953338861465, 0.06605548411607742, 0.05785574018955231, -0.03690408170223236, 0.029372666031122208, -0.064...
567c1168-c0e2-4531-a3e8-3544368ebd23
description: 'Allows SELECT and INSERT queries to be performed on data that is stored on a remote PostgreSQL server.' sidebar_label: 'postgresql' sidebar_position: 160 slug: /sql-reference/table-functions/postgresql title: 'postgresql' doc_type: 'reference' postgresql Table Function Allows SELECT and INSERT queries to be performed on data that is stored on a remote PostgreSQL server. Syntax {#syntax} sql postgresql({host:port, database, table, user, password[, schema, [, on_conflict]] | named_collection[, option=value [,..]]}) Arguments {#arguments} | Argument | Description | |---------------|-----------------------------------------------------------------------------| | host:port | PostgreSQL server address. | | database | Remote database name. | | table | Remote table name. | | user | PostgreSQL user. | | password | User password. | | schema | Non-default table schema. Optional. | | on_conflict | Conflict resolution strategy. Example: ON CONFLICT DO NOTHING . Optional. | Arguments can also be passed using named collections . In this case host and port should be specified separately. This approach is recommended for production environments. Returned value {#returned_value} A table object with the same columns as the original PostgreSQL table. :::note In INSERT queries, to distinguish the table function postgresql(...) from a table name with a list of column names, you must use the keywords FUNCTION or TABLE FUNCTION . See examples below. ::: Implementation Details {#implementation-details} SELECT queries on the PostgreSQL side run as COPY (SELECT ...) TO STDOUT inside a read-only PostgreSQL transaction, with a commit after each SELECT query. Simple WHERE clauses such as = , != , > , >= , < , <= , and IN are executed on the PostgreSQL server. All joins, aggregations, sorting, IN [ array ] conditions and the LIMIT sampling constraint are executed in ClickHouse only after the query to PostgreSQL finishes. INSERT queries on the PostgreSQL side run as COPY "table_name" (field1, field2, ... 
fieldN) FROM STDIN inside a PostgreSQL transaction, with auto-commit after each INSERT statement. PostgreSQL Array types convert into ClickHouse arrays. :::note Be careful: in PostgreSQL an array data type column like Integer[] may contain arrays of different dimensions in different rows, but in ClickHouse it is only allowed to have multidimensional arrays of the same dimension in all rows. ::: Supports multiple replicas that must be separated by | . For example:
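To make the pushdown rule above concrete, here is a sketch (it reuses the test table from the examples below; which predicates are pushed down may vary by version):

```sql
-- The simple comparison is sent to the PostgreSQL server, so filtering
-- happens there, before the data is transferred:
SELECT * FROM postgresql('localhost:5432', 'test', 'test', 'postgresql_user', 'password')
WHERE int_id <= 10;

-- An IN [array] condition is not pushed down: PostgreSQL streams the rows
-- and ClickHouse applies the filter after the query to PostgreSQL finishes:
SELECT * FROM postgresql('localhost:5432', 'test', 'test', 'postgresql_user', 'password')
WHERE int_id IN [1, 2, 3];
```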
{"source_file": "postgresql.md"}
[ 0.007274061441421509, 0.023423470556735992, -0.06572927534580231, 0.046158164739608765, -0.1225670725107193, -0.011828129179775715, 0.029215263202786446, 0.032594721764326096, 0.012892396189272404, 0.014774140901863575, -0.00964734610170126, -0.013482660986483097, 0.015585771761834621, -0....
9df19ece-add8-4b68-bd35-0ed01f8d34f2
Supports multiple replicas that must be separated by | . For example: sql SELECT name FROM postgresql(`postgres{1|2|3}:5432`, 'postgres_database', 'postgres_table', 'user', 'password'); or sql SELECT name FROM postgresql(`postgres1:5431|postgres2:5432`, 'postgres_database', 'postgres_table', 'user', 'password'); Supports replica priority for the PostgreSQL dictionary source. The bigger the number in the map, the lower the priority. The highest priority is 0 . Examples {#examples} Table in PostgreSQL: ```text postgres=# CREATE TABLE "public"."test" ( "int_id" SERIAL, "int_nullable" INT NULL DEFAULT NULL, "float" FLOAT NOT NULL, "str" VARCHAR(100) NOT NULL DEFAULT '', "float_nullable" FLOAT NULL DEFAULT NULL, PRIMARY KEY (int_id)); CREATE TABLE postgres=# INSERT INTO test (int_id, str, "float") VALUES (1,'test',2); INSERT 0 1 postgres=# SELECT * FROM test; int_id | int_nullable | float | str | float_nullable --------+--------------+-------+------+---------------- 1 | | 2 | test | (1 row) ``` Selecting data from ClickHouse using plain arguments: sql SELECT * FROM postgresql('localhost:5432', 'test', 'test', 'postgresql_user', 'password') WHERE str IN ('test'); Or using named collections : sql CREATE NAMED COLLECTION mypg AS host = 'localhost', port = 5432, database = 'test', user = 'postgresql_user', password = 'password'; SELECT * FROM postgresql(mypg, table='test') WHERE str IN ('test'); text β”Œβ”€int_id─┬─int_nullable─┬─float─┬─str──┬─float_nullable─┐ β”‚ 1 β”‚ ᴺᡁᴸᴸ β”‚ 2 β”‚ test β”‚ ᴺᡁᴸᴸ β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Inserting: sql INSERT INTO TABLE FUNCTION postgresql('localhost:5432', 'test', 'test', 'postgresql_user', 'password') (int_id, float) VALUES (2, 3); SELECT * FROM postgresql('localhost:5432', 'test', 'test', 'postgresql_user', 'password'); text β”Œβ”€int_id─┬─int_nullable─┬─float─┬─str──┬─float_nullable─┐ β”‚ 1 β”‚ 
ᴺᡁᴸᴸ β”‚ 2 β”‚ test β”‚ ᴺᡁᴸᴸ β”‚ β”‚ 2 β”‚ ᴺᡁᴸᴸ β”‚ 3 β”‚ β”‚ ᴺᡁᴸᴸ β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Using Non-default Schema: ```text postgres=# CREATE SCHEMA "nice.schema"; postgres=# CREATE TABLE "nice.schema"."nice.table" (a integer); postgres=# INSERT INTO "nice.schema"."nice.table" SELECT i FROM generate_series(0, 99) as t(i) ``` sql CREATE TABLE pg_table_schema_with_dots (a UInt32) ENGINE PostgreSQL('localhost:5432', 'clickhouse', 'nice.table', 'postgresql_user', 'password', 'nice.schema'); Related {#related} The PostgreSQL table engine Using PostgreSQL as a dictionary source Replicating or migrating Postgres data with PeerDB {#replicating-or-migrating-postgres-data-with-with-peerdb}
{"source_file": "postgresql.md"}
[ -0.002112702000886202, -0.043879732489585876, -0.07515139132738113, -0.0672297477722168, -0.09694914519786835, -0.030019039288163185, 0.00017004272376652807, -0.010647867806255817, -0.0250515379011631, 0.029735727235674858, 0.017575137317180634, 0.04301287606358528, 0.03391149640083313, -0...
46ed1d24-37a9-402f-8691-11b74601ba31
The PostgreSQL table engine Using PostgreSQL as a dictionary source Replicating or migrating Postgres data with PeerDB {#replicating-or-migrating-postgres-data-with-with-peerdb} In addition to table functions, you can always use PeerDB by ClickHouse to set up a continuous data pipeline from Postgres to ClickHouse. PeerDB is a tool designed specifically to replicate data from Postgres to ClickHouse using change data capture (CDC).
{"source_file": "postgresql.md"}
[ -0.04882749915122986, -0.06472461670637131, -0.07180555909872055, -0.006173542235046625, -0.08201754838228226, -0.03755742311477661, -0.024153733626008034, -0.050199273973703384, -0.05937158316373825, 0.055820778012275696, -0.003948505502194166, 0.053187817335128784, 0.008124076761305332, ...
9a79f48e-17cc-431f-b231-52eb051fc725
description: 'timeSeriesData returns the data table used by table db_name.time_series_table whose table engine is TimeSeries.' sidebar_label: 'timeSeriesData' sidebar_position: 145 slug: /sql-reference/table-functions/timeSeriesData title: 'timeSeriesData' doc_type: 'reference' timeSeriesData Table Function timeSeriesData(db_name.time_series_table) - Returns the data table used by table db_name.time_series_table whose table engine is TimeSeries : sql CREATE TABLE db_name.time_series_table ENGINE=TimeSeries DATA data_table The function also works if the data table is inner: sql CREATE TABLE db_name.time_series_table ENGINE=TimeSeries DATA INNER UUID '01234567-89ab-cdef-0123-456789abcdef' The following queries are equivalent: sql SELECT * FROM timeSeriesData(db_name.time_series_table); SELECT * FROM timeSeriesData('db_name.time_series_table'); SELECT * FROM timeSeriesData('db_name', 'time_series_table');
{"source_file": "timeSeriesData.md"}
[ -0.034002650529146194, -0.041977133601903915, -0.05150340497493744, -0.007475280202925205, -0.06518638879060745, -0.09204256534576416, 0.04988039284944534, 0.08370408415794373, 0.0062776366248726845, -0.03716328367590904, 0.006358925253152847, -0.0797187015414238, 0.0023347935639321804, -0...
92d13306-73f0-4809-b8c8-b01957fb9e12
description: 'Provides a table-like interface to select/insert files in Azure Blob Storage. Similar to the s3 function.' keywords: ['azure blob storage'] sidebar_label: 'azureBlobStorage' sidebar_position: 10 slug: /sql-reference/table-functions/azureBlobStorage title: 'azureBlobStorage' doc_type: 'reference' import ExperimentalBadge from '@theme/badges/ExperimentalBadge'; import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge'; azureBlobStorage Table Function Provides a table-like interface to select/insert files in Azure Blob Storage . This table function is similar to the s3 function . Syntax {#syntax} sql azureBlobStorage(connection_string|storage_account_url, container_name, blobpath, [account_name, account_key, format, compression, structure, partition_strategy, partition_columns_in_data_file, extra_credentials(client_id=, tenant_id=)]) Arguments {#arguments}
{"source_file": "azureBlobStorage.md"}
[ 0.01154843345284462, -0.046559955924749374, -0.11976932734251022, 0.0787351056933403, -0.022957902401685715, 0.055872734636068344, 0.06597360223531723, -0.0033267394173890352, 0.0244993194937706, 0.10767493396997452, 0.02172393538057804, 0.009066970087587833, 0.13159339129924774, 0.0126971...
c48b508b-72c5-4a4a-b653-adb8d07be6f4
| Argument | Description | |-------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------| | connection_string \| storage_account_url | connection_string includes the account name & key ( Create connection string ), or you can provide the storage account URL here and the account name & account key as separate parameters (see parameters account_name & account_key) | | container_name | Container name | | blobpath | File path. Supports the following wildcards in read-only mode: * , ** , ? , {abc,def} and {N..M} where N , M β€” numbers, 'abc' , 'def' β€” strings. | | account_name | If storage_account_url is used, then the account name can be specified here | | account_key
{"source_file": "azureBlobStorage.md"}
[ 0.027820559218525887, 0.09321624785661697, -0.03668836131691933, -0.034107331186532974, -0.05860394611954689, 0.01916765607893467, 0.03267664462327957, 0.04797063022851944, 0.04604816436767578, -0.05763087421655655, 0.011640780605375767, -0.04407937824726105, 0.0003477597492747009, -0.0341...
971d2535-fe59-457d-beda-2770b0bb84df
| account_key | If storage_account_url is used, then the account key can be specified here | | format | The format of the file. | | compression | Supported values: none , gzip/gz , brotli/br , xz/LZMA , zstd/zst . By default, it will autodetect compression by file extension (same as setting it to auto ). | | structure | Structure of the table. Format 'column1_name column1_type, column2_name column2_type, ...' . | | partition_strategy | Parameter is optional. Supported values: WILDCARD or HIVE . WILDCARD requires a {_partition_id} in the path, which is replaced with the partition key. HIVE does not allow wildcards, assumes the path is the table root, and generates Hive-style partitioned directories with Snowflake IDs as filenames and the file format as the extension. Defaults to WILDCARD | | partition_columns_in_data_file | Parameter is optional. Only used with the HIVE partition strategy. Tells ClickHouse whether to expect partition columns to be written in the data file. Defaults to false
{"source_file": "azureBlobStorage.md"}
[ 0.0036375741474330425, 0.03577835485339165, -0.07121344655752182, -0.0029543121345341206, 0.024327201768755913, -0.0032272222451865673, 0.045984119176864624, 0.06727232784032822, -0.08232422918081284, 0.023077569901943207, -0.05282527580857277, -0.010364746674895287, 0.04255812615156174, -...
5189a700-a9c6-4bcc-9614-f12eb48450ac
| Parameter is optional. Only used with the HIVE partition strategy. Tells ClickHouse whether to expect partition columns to be written in the data file. Defaults to false . | | extra_credentials | Use client_id and tenant_id for authentication. If extra_credentials are provided, they are given priority over account_name and account_key .
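The blobpath wildcards listed in the table above can be combined in a single read. A hedged sketch (the container and paths are hypothetical; the account name and key are the well-known Azurite development defaults used in the examples below):

```sql
-- {N..M} expands to a numeric range, * matches within one path level,
-- ** matches recursively.
SELECT count()
FROM azureBlobStorage('http://azurite1:10000/devstoreaccount1', 'testcontainer',
    'logs/2024-{1..12}/*.csv',
    'devstoreaccount1', 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==',
    'CSV');
```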
{"source_file": "azureBlobStorage.md"}
[ 0.03761804476380348, -0.01712910085916519, -0.09348780661821365, -0.04400063678622246, -0.023963037878274918, 0.01585768721997738, 0.04758989438414574, 0.0035538123920559883, -0.08072663098573685, -0.008187140338122845, 0.06656084954738617, -0.03649608790874481, 0.050171997398138046, -0.04...
a367bb8b-3f82-4eab-9fbd-0bb52696d967
Returned value {#returned_value} A table with the specified structure for reading or writing data in the specified file. Examples {#examples} Similar to the AzureBlobStorage table engine, users can use Azurite emulator for local Azure Storage development. Further details here . Below we assume Azurite is available at the hostname azurite1 . Write data into azure blob storage using the following : sql INSERT INTO TABLE FUNCTION azureBlobStorage('http://azurite1:10000/devstoreaccount1', 'testcontainer', 'test_{_partition_id}.csv', 'devstoreaccount1', 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV', 'auto', 'column1 UInt32, column2 UInt32, column3 UInt32') PARTITION BY column3 VALUES (1, 2, 3), (3, 2, 1), (78, 43, 3); And then it can be read using sql SELECT * FROM azureBlobStorage('http://azurite1:10000/devstoreaccount1', 'testcontainer', 'test_1.csv', 'devstoreaccount1', 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV', 'auto', 'column1 UInt32, column2 UInt32, column3 UInt32'); response β”Œβ”€β”€β”€column1─┬────column2─┬───column3─┐ β”‚ 3 β”‚ 2 β”‚ 1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ or using connection_string sql SELECT count(*) FROM azureBlobStorage('DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;EndPointSuffix=core.windows.net', 'testcontainer', 'test_3.csv', 'CSV', 'auto' , 'column1 UInt32, column2 UInt32, column3 UInt32'); text β”Œβ”€count()─┐ β”‚ 2 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Virtual Columns {#virtual-columns} _path β€” Path to the file. Type: LowCardinality(String) . _file β€” Name of the file. Type: LowCardinality(String) . _size β€” Size of the file in bytes. Type: Nullable(UInt64) . If the file size is unknown, the value is NULL . 
_time β€” Last modified time of the file. Type: Nullable(DateTime) . If the time is unknown, the value is NULL . Partitioned Write {#partitioned-write} Partition Strategy {#partition-strategy} Supported for INSERT queries only. WILDCARD (default): Replaces the {_partition_id} wildcard in the file path with the actual partition key. HIVE implements hive style partitioning for reads & writes. It generates files using the following format: <prefix>/<key1=val1/key2=val2...>/<snowflakeid>.<toLower(file_format)> . Example of HIVE partition strategy sql INSERT INTO TABLE FUNCTION azureBlobStorage(azure_conf2, storage_account_url = 'http://localhost:30000/devstoreaccount1', container='cont', blob_path='azure_table_root', format='CSVWithNames', compression='auto', structure='year UInt16, country String, id Int32', partition_strategy='hive') PARTITION BY (year, country) VALUES (2020, 'Russia', 1), (2021, 'Brazil', 2);
{"source_file": "azureBlobStorage.md"}
[ 0.03950485587120056, 0.005952620878815651, -0.14138466119766235, 0.10319695621728897, -0.09744692593812943, 0.013688959181308746, 0.0804021954536438, 0.06467755138874054, 0.027529466897249222, 0.11565994471311569, 0.03196525201201439, -0.0715760663151741, 0.1311332732439041, -0.01477031037...
cc50a9ad-9915-46fe-9ba8-1cf602955056
```result select _path, * from azureBlobStorage(azure_conf2, storage_account_url = 'http://localhost:30000/devstoreaccount1', container='cont', blob_path='azure_table_root/**.csvwithnames') β”Œβ”€_path───────────────────────────────────────────────────────────────────────────┬─id─┬─year─┬─country─┐ 1. β”‚ cont/azure_table_root/year=2021/country=Brazil/7351307847391293440.csvwithnames β”‚ 2 β”‚ 2021 β”‚ Brazil β”‚ 2. β”‚ cont/azure_table_root/year=2020/country=Russia/7351307847378710528.csvwithnames β”‚ 1 β”‚ 2020 β”‚ Russia β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ ``` use_hive_partitioning setting {#hive-style-partitioning} This is a hint for ClickHouse to parse Hive-style partitioned files at read time. It has no effect on writing. For symmetrical reads and writes, use the partition_strategy argument. When the setting use_hive_partitioning is set to 1, ClickHouse will detect Hive-style partitioning in the path ( /name=value/ ) and will allow using partition columns as virtual columns in the query. These virtual columns will have the same names as in the partitioned path, but starting with _ . Example Use a virtual column created with Hive-style partitioning sql SELECT * FROM azureBlobStorage(config, storage_account_url='...', container='...', blob_path='http://data/path/date=*/country=*/code=*/*.parquet') WHERE _date > '2020-01-01' AND _country = 'Netherlands' AND _code = 42; Using Shared Access Signatures (SAS) {#using-shared-access-signatures-sas-sas-tokens} A Shared Access Signature (SAS) is a URI that grants restricted access to an Azure Storage container or file. Use it to provide time-limited access to storage account resources without sharing your storage account key. 
More details here . The azureBlobStorage function supports Shared Access Signatures (SAS). A Blob SAS token contains all the information needed to authenticate the request, including the target blob, permissions, and validity period. To construct a blob URL, append the SAS token to the blob service endpoint. For example, if the endpoint is https://clickhousedocstest.blob.core.windows.net/ , the request becomes: ```sql SELECT count() FROM azureBlobStorage('BlobEndpoint=https://clickhousedocstest.blob.core.windows.net/;SharedAccessSignature=sp=r&st=2025-01-29T14:58:11Z&se=2025-01-29T22:58:11Z&spr=https&sv=2022-11-02&sr=c&sig=Ac2U0xl4tm%2Fp7m55IilWl1yHwk%2FJG0Uk6rMVuOiD0eE%3D', 'exampledatasets', 'example.csv') β”Œβ”€count()─┐ β”‚ 10 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ 1 row in set. Elapsed: 0.425 sec. ``` Alternatively, users can use the generated Blob SAS URL : ```sql SELECT count() FROM azureBlobStorage('https://clickhousedocstest.blob.core.windows.net/?sp=r&st=2025-01-29T14:58:11Z&se=2025-01-29T22:58:11Z&spr=https&sv=2022-11-02&sr=c&sig=Ac2U0xl4tm%2Fp7m55IilWl1yHwk%2FJG0Uk6rMVuOiD0eE%3D', 'exampledatasets', 'example.csv') β”Œβ”€count()─┐ β”‚ 10 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ 1 row in set. Elapsed: 0.153 sec. ```
{"source_file": "azureBlobStorage.md"}
[ 0.045821670442819595, -0.010476968251168728, -0.020270690321922302, 0.03158612176775932, -0.006200660020112991, -0.0021516212727874517, 0.062265556305646896, -0.018206773325800896, -0.012103104032576084, 0.10131514072418213, -0.006867108400911093, -0.08312325924634933, 0.014152994379401207, ...
fe4dc294-9af0-4d8a-b87e-7edfddd08be2
β”Œβ”€count()─┐ β”‚ 10 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ 1 row in set. Elapsed: 0.153 sec. ``` Related {#related} AzureBlobStorage Table Engine
{"source_file": "azureBlobStorage.md"}
[ -0.0004219755355734378, 0.002540457295253873, -0.0221859123557806, 0.03191665560007095, -0.024068079888820648, 0.07753472775220871, 0.05163882300257683, -0.06703159213066101, 0.04306508228182793, 0.0442541167140007, 0.12556631863117218, -0.047358568757772446, 0.06904257088899612, -0.017882...
13597851-f6d7-4cb4-a9cf-b44b6440bb24
description: 'Returns the table that is connected via ODBC.' sidebar_label: 'odbc' sidebar_position: 150 slug: /sql-reference/table-functions/odbc title: 'odbc' doc_type: 'reference' odbc Table Function Returns the table that is connected via ODBC . Syntax {#syntax} sql odbc(datasource, external_database, external_table) odbc(datasource, external_table) odbc(named_collection) Arguments {#arguments} | Argument | Description | |---------------------|------------------------------------------------------------------------| | datasource | Name of the section with connection settings in the odbc.ini file. | | external_database | Name of a database in an external DBMS. | | external_table | Name of a table in the external_database . | These parameters can also be passed using named collections . To safely implement ODBC connections, ClickHouse uses a separate program clickhouse-odbc-bridge . If the ODBC driver is loaded directly from clickhouse-server , driver problems can crash the ClickHouse server. ClickHouse automatically starts clickhouse-odbc-bridge when it is required. The ODBC bridge program is installed from the same package as the clickhouse-server . Fields with NULL values from the external table are converted into the default values for the base data type. For example, if a remote MySQL table field has the INT NULL type it is converted to 0 (the default value for the ClickHouse Int32 data type). Usage Example {#usage-example} Getting data from the local MySQL installation via ODBC This example is checked for Ubuntu Linux 18.04 and MySQL server 5.7. Ensure that unixODBC and MySQL Connector are installed. By default (if installed from packages), ClickHouse starts as user clickhouse . Thus you need to create and configure this user in the MySQL server. 
bash $ sudo mysql sql mysql> CREATE USER 'clickhouse'@'localhost' IDENTIFIED BY 'clickhouse'; mysql> GRANT ALL PRIVILEGES ON *.* TO 'clickhouse'@'localhost' WITH GRANT OPTION; Then configure the connection in /etc/odbc.ini . bash $ cat /etc/odbc.ini [mysqlconn] DRIVER = /usr/local/lib/libmyodbc5w.so SERVER = 127.0.0.1 PORT = 3306 DATABASE = test USERNAME = clickhouse PASSWORD = clickhouse You can check the connection using the isql utility from the unixODBC installation. bash $ isql -v mysqlconn +-------------------------+ | Connected! | | | ... Table in MySQL: ```text mysql> CREATE TABLE test.test ( -> int_id INT NOT NULL AUTO_INCREMENT, -> int_nullable INT NULL DEFAULT NULL, -> float FLOAT NOT NULL, -> float_nullable FLOAT NULL DEFAULT NULL, -> PRIMARY KEY (int_id)); Query OK, 0 rows affected (0,09 sec) mysql> insert into test ( int_id , float ) VALUES (1,2); Query OK, 1 row affected (0,00 sec)
{"source_file": "odbc.md"}
[ -0.023056840524077415, -0.027631521224975586, -0.09147054702043533, 0.08817968517541885, -0.03357270732522011, -0.05245263874530792, 0.04023145139217377, 0.06711048632860184, 0.015971947461366653, -0.03050752356648445, 0.0028201851528137922, -0.054949142038822174, 0.02422383800148964, -0.0...
c0d602fc-b05b-4682-9f50-5bb2e5f461d2
mysql> insert into test ( int_id , float ) VALUES (1,2); Query OK, 1 row affected (0,00 sec) mysql> select * from test; +------+----------+-----+----------+ | int_id | int_nullable | float | float_nullable | +------+----------+-----+----------+ | 1 | NULL | 2 | NULL | +------+----------+-----+----------+ 1 row in set (0,00 sec) ``` Retrieving data from the MySQL table in ClickHouse: sql SELECT * FROM odbc('DSN=mysqlconn', 'test', 'test') text β”Œβ”€int_id─┬─int_nullable─┬─float─┬─float_nullable─┐ β”‚ 1 β”‚ 0 β”‚ 2 β”‚ 0 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Related {#see-also} ODBC dictionaries ODBC table engine .
{"source_file": "odbc.md"}
[ -0.016149191185832024, -0.0077901300974190235, -0.047429852187633514, 0.08582255244255066, -0.03395487368106842, -0.0681992620229721, 0.08214083313941956, 0.0000966121515375562, -0.017954736948013306, -0.04969991371035576, 0.10397258400917053, -0.08153335005044937, 0.059850700199604034, -0...
c44f922e-068f-41d9-a173-7549a85cb6a7
description: 'Creates a temporary Merge table. The structure will be derived from underlying tables by using a union of their columns and by deriving common types.' sidebar_label: 'merge' sidebar_position: 130 slug: /sql-reference/table-functions/merge title: 'merge' doc_type: 'reference' merge Table Function Creates a temporary Merge table. The table schema is derived from underlying tables by using a union of their columns and by deriving common types. The same virtual columns are available as for the Merge table engine. Syntax {#syntax} sql merge(['db_name',] 'tables_regexp') Arguments {#arguments} | Argument | Description | |-----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | db_name | Possible values (optional, default is currentDatabase() ): - database name, - constant expression that returns a string with a database name, for example, currentDatabase() , - REGEXP(expression) , where expression is a regular expression to match the DB names. | | tables_regexp | A regular expression to match the table names in the specified DB or DBs. | Related {#related} Merge table engine
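The function is typically used to query a family of tables with compatible schemas in one shot. A minimal sketch (the table names are hypothetical; _table is one of the virtual columns provided by the underlying Merge engine):

```sql
-- Query all tables in the current database whose names start with 'metrics_'
-- as one table, and show how many rows come from each underlying table.
SELECT _table, count() AS rows
FROM merge(currentDatabase(), '^metrics_')
GROUP BY _table
ORDER BY _table;
```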
{"source_file": "merge.md"}
[ 0.006969159934669733, -0.012841575779020786, -0.011233414523303509, 0.03842092305421829, -0.018785834312438965, 0.0020731741096824408, 0.01102438848465681, 0.04465383291244507, -0.02009563520550728, 0.0269001517444849, 0.02797260507941246, -0.03784584626555443, 0.010279906913638115, -0.050...