id stringlengths 36 36 | document stringlengths 3 3k | metadata stringlengths 23 69 | embeddings listlengths 384 384 |
|---|---|---|---|
5a4e38a1-734d-4141-bbb4-6747a950c11c | description: 'Computes the sum of the numbers, using the same data type for the result as for the input parameters. If the sum exceeds the maximum value for this data type, it is calculated with overflow.'
sidebar_position: 200
slug: /sql-reference/aggregate-functions/reference/sumwithoverflow
title: 'sumWithOverflow'
doc_type: 'reference'
# sumWithOverflow
Computes the sum of the numbers, using the same data type for the result as for the input parameters. If the sum exceeds the maximum value for this data type, it is calculated with overflow.
Only works for numbers.
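The wrap-around is ordinary fixed-width modular arithmetic. As a rough sketch (Python here, outside ClickHouse), a `UInt16` sum keeps only the low 16 bits:

```python
def sum_with_overflow_u16(values):
    """Sum with UInt16 wrap-around semantics: keep only the low 16 bits."""
    total = 0
    for v in values:
        total = (total + v) & 0xFFFF  # 0xFFFF masks the running sum to 16 bits
    return total

# 118700 does not fit in UInt16, so it wraps to 118700 - 65536 = 53164.
```

The same idea applies to the other `(U)Int*` widths with the corresponding mask.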
## Syntax

```sql
sumWithOverflow(num)
```
## Parameters

- `num`: Column of numeric values. `(U)Int*`, `Float*`, `Decimal*`.
## Returned value

The sum of the values. `(U)Int*`, `Float*`, `Decimal*`.
## Example

First we create a table `employees` and insert some fictional employee data into it. For this example we select `monthly_salary` as `UInt16` such that a sum of these values may produce an overflow.

Query:
sql
CREATE TABLE employees
(
`id` UInt32,
`name` String,
`monthly_salary` UInt16
)
ENGINE = Log
```sql
SELECT
    sum(monthly_salary) AS no_overflow,
    sumWithOverflow(monthly_salary) AS overflow,
    toTypeName(no_overflow),
    toTypeName(overflow)
FROM employees
```
We query for the total amount of the employee salaries using the `sum` and `sumWithOverflow` functions and show their types using the `toTypeName` function.

For the `sum` function the resulting type is `UInt64`, big enough to contain the sum, whilst for `sumWithOverflow` the resulting type remains as `UInt16`.
Query:

```sql
SELECT
    sum(monthly_salary) AS no_overflow,
    sumWithOverflow(monthly_salary) AS overflow,
    toTypeName(no_overflow),
    toTypeName(overflow),
FROM employees;
```
Result:

```response
   ┌─no_overflow─┬─overflow─┬─toTypeName(no_overflow)─┬─toTypeName(overflow)─┐
1. │      118700 │    53164 │ UInt64                  │ UInt16               │
   └─────────────┴──────────┴─────────────────────────┴──────────────────────┘
``` | {"source_file": "sumwithoverflow.md"} | [...] |
b498ae1c-1c0e-42be-b8d2-5248dea22bae | description: 'Calculates the sum. Only works for numbers.'
sidebar_position: 195
slug: /sql-reference/aggregate-functions/reference/sum
title: 'sum'
doc_type: 'reference'
# sum
Calculates the sum. Only works for numbers.
## Syntax

```sql
sum(num)
```
## Parameters

- `num`: Column of numeric values. `(U)Int*`, `Float*`, `Decimal*`.
## Returned value

The sum of the values. `(U)Int*`, `Float*`, `Decimal*`.
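As a quick sanity check outside ClickHouse, summing a column is just an ordinary arithmetic sum; the salary figures here are the ones used in the example that follows:

```python
# Salaries matching the four-row example table in this document.
salaries = [45680, 72350, 58900, 89210]

total = sum(salaries)  # Python's built-in sum, the analogue of SQL sum()
```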
## Example

First we create a table `employees` and insert some fictional employee data into it.

Query:
```sql
CREATE TABLE employees
(
    `id` UInt32,
    `name` String,
    `salary` UInt32
)
ENGINE = Log
```
```sql
INSERT INTO employees VALUES
    (87432, 'John Smith', 45680),
    (59018, 'Jane Smith', 72350),
    (20376, 'Ivan Ivanovich', 58900),
    (71245, 'Anastasia Ivanovna', 89210);
```
We query for the total amount of the employee salaries using the `sum` function.

Query:

```sql
SELECT sum(salary) FROM employees;
```
Result:

```response
   ┌─sum(salary)─┐
1. │      266140 │
   └─────────────┘
``` | {"source_file": "sum.md"} | [...] |
a5692b20-e703-4918-ac4a-502be08926fc | description: 'Aggregate function that calculates the slope between the leftmost and rightmost points across a group of values.'
sidebar_position: 114
slug: /sql-reference/aggregate-functions/reference/boundingRatio
title: 'boundingRatio'
doc_type: 'reference'
Aggregate function that calculates the slope between the leftmost and rightmost points across a group of values.
Example:

Sample data:

```sql
SELECT
    number,
    number * 1.5
FROM numbers(10)
```

```response
┌─number─┬─multiply(number, 1.5)─┐
│      0 │                     0 │
│      1 │                   1.5 │
│      2 │                     3 │
│      3 │                   4.5 │
│      4 │                     6 │
│      5 │                   7.5 │
│      6 │                     9 │
│      7 │                  10.5 │
│      8 │                    12 │
│      9 │                  13.5 │
└────────┴───────────────────────┘
```
The `boundingRatio()` function returns the slope of the line between the leftmost and rightmost points; in the above data these points are `(0,0)` and `(9,13.5)`.
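A minimal sketch of that definition (Python, not the ClickHouse implementation): take the points with the smallest and largest `x` and divide the rise by the run:

```python
def bounding_ratio(points):
    """Slope between the leftmost and rightmost (x, y) points of a group."""
    leftmost = min(points)    # tuples compare by x first
    rightmost = max(points)
    return (rightmost[1] - leftmost[1]) / (rightmost[0] - leftmost[0])

pts = [(n, n * 1.5) for n in range(10)]  # the same data as numbers(10)
```

For the sample data this returns 1.5, matching the query that follows.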
```sql
SELECT boundingRatio(number, number * 1.5)
FROM numbers(10)
```

```response
┌─boundingRatio(number, multiply(number, 1.5))─┐
│                                          1.5 │
└──────────────────────────────────────────────┘
``` | {"source_file": "boundrat.md"} | [...] |
12cdf172-ba3e-402d-b510-117b1ad6acbf | description: 'Aggregate function that calculates PromQL-like delta over time series data on the specified grid.'
sidebar_position: 221
slug: /sql-reference/aggregate-functions/reference/timeSeriesDeltaToGrid
title: 'timeSeriesDeltaToGrid'
doc_type: 'reference'
Aggregate function that takes time series data as pairs of timestamps and values and calculates a PromQL-like `delta` from this data on a regular time grid described by start timestamp, end timestamp and step. For each point on the grid the samples for calculating `delta` are considered within the specified time window.
Parameters:

- `start timestamp` - Specifies start of the grid.
- `end timestamp` - Specifies end of the grid.
- `grid step` - Specifies step of the grid in seconds.
- `staleness` - Specifies the maximum "staleness" in seconds of the considered samples. The staleness window is a left-open and right-closed interval.
Arguments:

- `timestamp` - timestamp of the sample
- `value` - value of the time series corresponding to the `timestamp`
Return value:

`delta` values on the specified grid as an `Array(Nullable(Float64))`. The returned array contains one value for each time grid point. The value is NULL if there are not enough samples within the window to calculate the delta value for a particular grid point.
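The grid itself is the inclusive range from the start timestamp to the end timestamp in steps of `grid step`; for the parameters used in the example (start 90, end 210, step 15) it can be sketched as:

```python
def make_grid(start_ts, end_ts, step_seconds):
    """Grid points from start_ts to end_ts (inclusive), spaced step_seconds apart."""
    return list(range(start_ts, end_ts + 1, step_seconds))
```

`make_grid(90, 210, 15)` yields the nine points [90, 105, ..., 210]; the result array contains one element per grid point.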
Example:

The following query calculates `delta` values on the grid [90, 105, 120, 135, 150, 165, 180, 195, 210]:

```sql
WITH
    -- NOTE: the gap between 140 and 190 is to show how values are filled for ts = 150, 165, 180 according to window parameter
    [110, 120, 130, 140, 190, 200, 210, 220, 230]::Array(DateTime) AS timestamps,
    [1, 1, 3, 4, 5, 5, 8, 12, 13]::Array(Float32) AS values, -- array of values corresponding to timestamps above
    90 AS start_ts,       -- start of timestamp grid
    90 + 120 AS end_ts,   -- end of timestamp grid
    15 AS step_seconds,   -- step of timestamp grid
    45 AS window_seconds  -- "staleness" window
SELECT timeSeriesDeltaToGrid(start_ts, end_ts, step_seconds, window_seconds)(timestamp, value)
FROM
(
    -- This subquery converts arrays of timestamps and values into rows of `timestamp`, `value`
    SELECT
        arrayJoin(arrayZip(timestamps, values)) AS ts_and_val,
        ts_and_val.1 AS timestamp,
        ts_and_val.2 AS value
);
```
Response:

```response
   ┌─timeSeriesDeltaToGr⋯timestamps, values)─┐
1. │ [NULL,NULL,0,3,4.5,3.75,NULL,NULL,3.75] │
   └─────────────────────────────────────────┘
```

It is also possible to pass multiple samples of timestamps and values as Arrays of equal size. The same query with array arguments:
```sql
WITH
    [110, 120, 130, 140, 190, 200, 210, 220, 230]::Array(DateTime) AS timestamps,
    [1, 1, 3, 4, 5, 5, 8, 12, 13]::Array(Float32) AS values,
    90 AS start_ts,
    90 + 120 AS end_ts,
    15 AS step_seconds,
    45 AS window_seconds
SELECT timeSeriesDeltaToGrid(start_ts, end_ts, step_seconds, window_seconds)(timestamps, values);
``` | {"source_file": "timeSeriesDeltaToGrid.md"} | [...] |
40125278-47b2-4efa-b6fd-664ee8c8b3d4 | :::note
This function is experimental, enable it by setting `allow_experimental_ts_to_grid_aggregate_function=true`.
::: | {"source_file": "timeSeriesDeltaToGrid.md"} | [...] |
883e9089-10d4-41c9-b8a1-34bcc5358c9e | description: 'Calculates the exact number of different argument values.'
sidebar_position: 207
slug: /sql-reference/aggregate-functions/reference/uniqexact
title: 'uniqExact'
doc_type: 'reference'
# uniqExact
Calculates the exact number of different argument values.
```sql
uniqExact(x[, ...])
```
Use the `uniqExact` function if you absolutely need an exact result. Otherwise use the `uniq` function.

The `uniqExact` function uses more memory than `uniq`, because the size of the state has unbounded growth as the number of different values increases.
## Arguments

The function takes a variable number of parameters. Parameters can be `Tuple`, `Array`, `Date`, `DateTime`, `String`, or numeric types.
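Semantically, `uniqExact` is plain exact distinct counting; a sketch of the contract (not the implementation, which keeps a growing hash table of aggregate state):

```python
def uniq_exact(values):
    """Exact distinct count: every distinct value is retained, so memory grows without bound."""
    return len(set(values))
```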
## Example

In this example we'll use the `uniqExact` function to count the number of unique type codes (a short identifier for the type of aircraft) in the opensky data set.

```sql title="Query"
SELECT uniqExact(typecode) FROM opensky.opensky
```

```response title="Response"
1106
```
## See Also

- uniq
- uniqCombined
- uniqHLL12
- uniqTheta | {"source_file": "uniqexact.md"} | [...] |
d3efb692-2498-4ec4-a524-6aae5af78ef7 | description: 'Returns the cumulative exponential decay over a time series at the index t in time.'
sidebar_position: 134
slug: /sql-reference/aggregate-functions/reference/exponentialTimeDecayedCount
title: 'exponentialTimeDecayedCount'
doc_type: 'reference'
# exponentialTimeDecayedCount {#exponentialtimedecayedcount}

Returns the cumulative exponential decay over a time series at the index `t` in time.
## Syntax

```sql
exponentialTimeDecayedCount(x)(t)
```
## Arguments

- `t` — Time. Integer, Float or Decimal, DateTime, DateTime64.
## Parameters

- `x` — Half-life period. Integer, Float or Decimal.
## Returned values

Returns the cumulative exponential decay at the given point in time. `Float64`.
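Judging from the example output, each row seen so far contributes an exponentially decayed unit weight `exp(-(t - t_i) / x)`; this reading (an assumption inferred from the example, not taken from the ClickHouse source) can be sketched as:

```python
import math

def exp_time_decayed_count(times, x):
    """Sum of exponentially decayed unit weights, evaluated at the last time point."""
    t = times[-1]
    return sum(math.exp(-(t - ti) / x) for ti in times)
```

With `x = 10`, `exp_time_decayed_count([0, 1], 10)` rounds to 1.905, matching the second row of the example below.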
## Example

Query:

```sql
SELECT
    value,
    time,
    round(exp_smooth, 3),
    bar(exp_smooth, 0, 20, 50) AS bar
FROM
(
    SELECT
        (number % 5) = 0 AS value,
        number AS time,
        exponentialTimeDecayedCount(10)(time) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS exp_smooth
    FROM numbers(50)
);
```
Result: | {"source_file": "exponentialtimedecayedcount.md"} | [...] |
9c084cff-fcca-4a8a-918d-aa8f682697a7 | ```response
    ┌─value─┬─time─┬─round(exp_smooth, 3)─┬─bar────────────────────────┐
 1. │     1 │    0 │                    1 │ ███                        │
 2. │     0 │    1 │                1.905 │ █████                      │
 3. │     0 │    2 │                2.724 │ ███████                    │
 4. │     0 │    3 │                3.464 │ █████████                  │
 5. │     0 │    4 │                4.135 │ ██████████                 │
 6. │     1 │    5 │                4.741 │ ████████████               │
 7. │     0 │    6 │                 5.29 │ █████████████              │
 8. │     0 │    7 │                5.787 │ ██████████████             │
 9. │     0 │    8 │                6.236 │ ████████████████           │
10. │     0 │    9 │                6.643 │ █████████████████          │
11. │     1 │   10 │                 7.01 │ ██████████████████         │
12. │     0 │   11 │                7.343 │ ██████████████████         │
13. │     0 │   12 │                7.644 │ ███████████████████        │
14. │     0 │   13 │                7.917 │ ████████████████████       │
15. │     0 │   14 │                8.164 │ ████████████████████       │
16. │     1 │   15 │                8.387 │ █████████████████████      │
17. │     0 │   16 │                8.589 │ █████████████████████      │
18. │     0 │   17 │                8.771 │ ██████████████████████     │
19. │     0 │   18 │                8.937 │ ██████████████████████     │
20. │     0 │   19 │                9.086 │ ███████████████████████    │
21. │     1 │   20 │                9.222 │ ███████████████████████    │
22. │     0 │   21 │                9.344 │ ███████████████████████    │
23. │     0 │   22 │                9.455 │ ████████████████████████   │
24. │     0 │   23 │                9.555 │ ████████████████████████   │
25. │     0 │   24 │                9.646 │ ████████████████████████   │
26. │     1 │   25 │                9.728 │ ████████████████████████   │
27. │     0 │   26 │                9.802 │ █████████████████████████  │
28. │     0 │   27 │                9.869 │ █████████████████████████  │
29. │     0 │   28 │                 9.93 │ █████████████████████████  │
30. │     0 │   29 │                9.985 │ █████████████████████████  │
31. │     1 │   30 │               10.035 │ █████████████████████████  │
32. │     0 │   31 │                10.08 │ █████████████████████████  │
33. │     0 │   32 │               10.121 │ █████████████████████████  │
34. │     0 │   33 │               10.158 │ █████████████████████████  │
35. │     0 │   34 │               10.191 │ █████████████████████████  │
36. │     1 │   35 │               10.221 │ ██████████████████████████ │
37. │     0 │   36 │               10.249 │ ██████████████████████████ │
38. │     0 │   37 │               10.273 │ ██████████████████████████ │
39. │     0 │   38 │               10.296 │ ██████████████████████████ │
``` | {"source_file": "exponentialtimedecayedcount.md"} | [...] |
c1e79a70-f266-48ac-a64a-caecdc0c4fbe | ```response
38. │     0 │   37 │               10.273 │ ██████████████████████████ │
39. │     0 │   38 │               10.296 │ ██████████████████████████ │
40. │     0 │   39 │               10.316 │ ██████████████████████████ │
41. │     1 │   40 │               10.334 │ ██████████████████████████ │
42. │     0 │   41 │               10.351 │ ██████████████████████████ │
43. │     0 │   42 │               10.366 │ ██████████████████████████ │
44. │     0 │   43 │               10.379 │ ██████████████████████████ │
45. │     0 │   44 │               10.392 │ ██████████████████████████ │
46. │     1 │   45 │               10.403 │ ██████████████████████████ │
47. │     0 │   46 │               10.413 │ ██████████████████████████ │
48. │     0 │   47 │               10.422 │ ██████████████████████████ │
49. │     0 │   48 │                10.43 │ ██████████████████████████ │
50. │     0 │   49 │               10.438 │ ██████████████████████████ │
    └───────┴──────┴──────────────────────┴────────────────────────────┘
``` | {"source_file": "exponentialtimedecayedcount.md"} | [...] |
67c95dd7-8948-4ce1-b236-f8e5c9254a3d | description: 'Calculates the sum of the numbers and counts the number of rows at the same time. The function is used by the ClickHouse query optimizer: if there are multiple sum, count or avg functions in a query, they can be replaced by a single sumCount function to reuse the calculations. The function rarely needs to be used explicitly.'
sidebar_position: 196
slug: /sql-reference/aggregate-functions/reference/sumcount
title: 'sumCount'
doc_type: 'reference'
Calculates the sum of the numbers and counts the number of rows at the same time. The function is used by the ClickHouse query optimizer: if there are multiple `sum`, `count` or `avg` functions in a query, they can be replaced by a single `sumCount` function to reuse the calculations. The function rarely needs to be used explicitly.
## Syntax

```sql
sumCount(x)
```
## Arguments

- `x` — Input value, must be `Integer`, `Float`, or `Decimal`.
## Returned value

`Tuple` `(sum, count)`, where `sum` is the sum of numbers and `count` is the number of rows with not-NULL values.

Type: `Tuple`.
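The contract can be sketched as a single pass that accumulates both aggregates at once (with `None` standing in for NULL):

```python
def sum_count(values):
    """Return (sum, count) over the not-NULL (non-None) values in one pass."""
    total = 0
    count = 0
    for v in values:
        if v is not None:
            total += v
            count += 1
    return (total, count)
```

For the example data below (0..19 plus a NULL) this yields `(190, 20)`.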
## Example

Query:

```sql
CREATE TABLE s_table (x Int8) ENGINE = Log;
INSERT INTO s_table SELECT number FROM numbers(0, 20);
INSERT INTO s_table VALUES (NULL);
SELECT sumCount(x) FROM s_table;
```
Result:

```text
┌─sumCount(x)─┐
│    (190,20) │
└─────────────┘
```

## See also

- `optimize_syntax_fuse_functions` setting. | {"source_file": "sumcount.md"} | [...] |
6aa6ac55-69b8-48f2-9846-caf652979901 | description: 'Computes quantile of a histogram using linear interpolation.'
sidebar_position: 364
slug: /sql-reference/aggregate-functions/reference/quantilePrometheusHistogram
title: 'quantilePrometheusHistogram'
doc_type: 'reference'
# quantilePrometheusHistogram
Computes the quantile of a histogram using linear interpolation, taking into account the cumulative value and upper bounds of each histogram bucket.

To get the interpolated value, all the passed values are combined into an array, which is then sorted by the corresponding bucket upper bound values. Quantile interpolation is then performed similarly to the PromQL `histogram_quantile()` function on a classic histogram, performing a linear interpolation using the lower and upper bound of the bucket in which the quantile position is found.
## Syntax

```sql
quantilePrometheusHistogram(level)(bucket_upper_bound, cumulative_bucket_value)
```
## Arguments

- `level` — Level of quantile. Optional parameter. Constant floating-point number from 0 to 1. We recommend using a `level` value in the range of `[0.01, 0.99]`. Default value: `0.5`. At `level=0.5` the function calculates the median.
- `bucket_upper_bound` — Upper bounds of the histogram buckets. The highest bucket must have an upper bound of `+Inf`.
- `cumulative_bucket_value` — Cumulative `UInt` or `Float64` values of the histogram buckets. Values must be monotonically increasing as the bucket upper bound increases.
## Returned value

Quantile of the specified level.

Type: `Float64`.
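A simplified reading of the interpolation (an assumption based on `histogram_quantile()` semantics, not the exact ClickHouse code): locate the bucket whose cumulative count reaches the quantile rank, then interpolate linearly between its lower and upper bound:

```python
def quantile_prometheus_histogram(level, bounds, cumulative):
    """Linear-interpolated quantile over (upper_bound, cumulative_count) buckets."""
    rank = level * cumulative[-1]      # target cumulative count
    prev_bound, prev_cum = 0.0, 0.0
    for bound, cum in zip(bounds, cumulative):
        if cum >= rank:
            if cum == prev_cum:
                return bound
            # linear interpolation inside [prev_bound, bound]
            return prev_bound + (bound - prev_bound) * (rank - prev_cum) / (cum - prev_cum)
        prev_bound, prev_cum = bound, cum
    return bounds[-1]
```

For the example table below, `quantile_prometheus_histogram(0.5, [0, 0.5, 1, float('inf')], [6, 11, 14, 19])` gives 0.35.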
## Example

Input table:

```text
   ┌─bucket_upper_bound─┬─cumulative_bucket_value─┐
1. │                  0 │                       6 │
2. │                0.5 │                      11 │
3. │                  1 │                      14 │
4. │                inf │                      19 │
   └────────────────────┴─────────────────────────┘
```
Result:

```text
   ┌─quantilePrometheusHistogram(bucket_upper_bound, cumulative_bucket_value)─┐
1. │                                                                     0.35 │
   └──────────────────────────────────────────────────────────────────────────┘
```

## See Also

- median
- quantiles | {"source_file": "quantileprometheushistogram.md"} | [...] |
a1a79f5e-f8b9-4cf0-9982-af035c15f3d3 | description: 'Calculate the sample variance of a data set. Unlike varSamp, this function uses a numerically stable algorithm. It works slower but provides a lower computational error.'
sidebar_position: 213
slug: /sql-reference/aggregate-functions/reference/varsampstable
title: 'varSampStable'
doc_type: 'reference'
# varSampStable {#varsampstable}

Calculate the sample variance of a data set. Unlike `varSamp`, this function uses a numerically stable algorithm. It works slower but provides a lower computational error.
## Syntax

```sql
varSampStable(x)
```

Alias: `VAR_SAMP_STABLE`.
## Parameters

- `x`: The population for which you want to calculate the sample variance. `(U)Int*`, `Float*`, `Decimal*`.
## Returned value

Returns the sample variance of the input data set. `Float64`.
## Implementation details

The `varSampStable` function calculates the sample variance using the same formula as `varSamp`:

$$
\sum\frac{(x - \text{mean}(x))^2}{(n - 1)}
$$
Where:

- `x` is each individual data point in the data set.
- `mean(x)` is the arithmetic mean of the data set.
- `n` is the number of data points in the data set.
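The formula is easy to check with a plain two-pass implementation (this is the naive form, not the numerically stable streaming algorithm that `varSampStable` actually uses):

```python
def var_samp(xs):
    """Two-pass sample variance: sum((x - mean)^2) / (n - 1)."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)
```

For the example values below (10.5, 12.3, 9.8, 11.2, 10.7) this rounds to 0.865.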
## Example

Query:
```sql
DROP TABLE IF EXISTS test_data;
CREATE TABLE test_data
(
x Float64
)
ENGINE = Memory;
INSERT INTO test_data VALUES (10.5), (12.3), (9.8), (11.2), (10.7);
SELECT round(varSampStable(x),3) AS var_samp_stable FROM test_data;
```
Response:

```response
┌─var_samp_stable─┐
│           0.865 │
└─────────────────┘
``` | {"source_file": "varsampstable.md"} | [...] |
44a3afb6-9b85-4a0b-80af-519e3e1d3896 | description: 'Returns an array of the approximately most frequent values in the specified column. The resulting array is sorted in descending order of approximate frequency of values (not by the values themselves).'
sidebar_position: 202
slug: /sql-reference/aggregate-functions/reference/topk
title: 'topK'
doc_type: 'reference'
# topK
Returns an array of the approximately most frequent values in the specified column. The resulting array is sorted in descending order of approximate frequency of values (not by the values themselves).
Implements the Filtered Space-Saving algorithm for analyzing TopK, based on the reduce-and-combine algorithm from Parallel Space Saving.

```sql
topK(N)(column)
topK(N, load_factor)(column)
topK(N, load_factor, 'counts')(column)
```
This function does not provide a guaranteed result. In certain situations, errors might occur and it might return frequent values that aren't the most frequent values.

We recommend using `N < 10`; performance is reduced with large `N` values. The maximum value of `N` is 65536.
## Parameters

- `N` — The number of elements to return. Optional. Default value: 10.
- `load_factor` — Defines how many cells are reserved for values. If uniq(column) > N * load_factor, the result of the topK function will be approximate. Optional. Default value: 3.
- `counts` — Defines whether the result should contain the approximate count and error value.
## Arguments

- `column` — The value to calculate frequency.
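For intuition, the exact (memory-unbounded) analogue of `topK` is a frequency counter; the real function approximates this result with a bounded Filtered Space-Saving state:

```python
from collections import Counter

def top_k_exact(n, column):
    """Exact top-n most frequent values, most frequent first."""
    return [value for value, _count in Counter(column).most_common(n)]
```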
## Example

Take the OnTime data set and select the three most frequently occurring values in the `AirlineID` column.

```sql
SELECT topK(3)(AirlineID) AS res
FROM ontime
```

```text
┌─res─────────────────┐
│ [19393,19790,19805] │
└─────────────────────┘
```
## See Also

- topKWeighted
- approx_top_k
- approx_top_sum | {"source_file": "topk.md"} | [...] |
a07e3e89-3cb6-499b-8921-196a61cb15a9 | description: 'Aggregate function that calculates the positions of the occurrences of the maxIntersections function.'
sidebar_position: 164
slug: /sql-reference/aggregate-functions/reference/maxintersectionsposition
title: 'maxIntersectionsPosition'
doc_type: 'reference'
# maxIntersectionsPosition

Aggregate function that calculates the positions of the occurrences of the maxIntersections function.

The syntax is:

```sql
maxIntersectionsPosition(start_column, end_column)
```
## Arguments

- `start_column` — the numeric column that represents the start of each interval. If `start_column` is `NULL` or 0 then the interval will be skipped.
- `end_column` — the numeric column that represents the end of each interval. If `end_column` is `NULL` or 0 then the interval will be skipped.
## Returned value

Returns the start positions of the maximum number of intersected intervals.
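One way to read this is as an endpoint sweep: walk the sorted interval endpoints and record the position where the running count of open intervals first reaches its maximum (a sketch; exact tie-breaking in ClickHouse may differ):

```python
def max_intersections_position(intervals):
    """Start value at which the maximum number of overlapping intervals is first reached."""
    events = sorted([(s, 1) for s, e in intervals] + [(e, -1) for s, e in intervals])
    best = count = 0
    position = None
    for point, delta in events:  # at equal points, ends (-1) sort before starts (+1)
        count += delta
        if count > best:
            best, position = count, point
    return position
```

For the four intervals used in the example below this returns 2.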
## Example
```sql
CREATE TABLE my_events (
start UInt32,
end UInt32
)
ENGINE = MergeTree
ORDER BY tuple();
INSERT INTO my_events VALUES
(1, 3),
(1, 6),
(2, 5),
(3, 7);
```
The intervals look like the following:

```response
1 - 3
1 - - - - 6
2 - - 5
3 - - - 7
```
Notice that three of these intervals have the value 4 in common, and that this overlap starts with the 2nd interval:

```sql
SELECT maxIntersectionsPosition(start, end) FROM my_events;
```

Response:

```response
2
```

In other words, the `(1,6)` row is the start of the 3 intervals that intersect, and 3 is the maximum number of intervals that intersect. | {"source_file": "maxintersectionsposition.md"} | [...] |
4421af9a-7d95-4450-a618-7470b317d3b9 | description: 'The result is equal to the square root of varSamp. Unlike stddevSamp, this function uses a numerically stable algorithm.'
sidebar_position: 191
slug: /sql-reference/aggregate-functions/reference/stddevsampstable
title: 'stddevSampStable'
doc_type: 'reference'
# stddevSampStable

The result is equal to the square root of `varSamp`. Unlike `stddevSamp`, this function uses a numerically stable algorithm. It works slower but provides a lower computational error.
## Syntax

```sql
stddevSampStable(x)
```
## Parameters

- `x`: Values for which to find the square root of sample variance. `(U)Int*`, `Float*`, `Decimal*`.
## Returned value

Square root of sample variance of `x`. `Float64`.
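A naive two-pass check of the definition (not the stable streaming algorithm itself, which ClickHouse implements internally):

```python
import math

def stddev_samp(xs):
    """Square root of the two-pass sample variance."""
    n = len(xs)
    mean = sum(xs) / n
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
```

For the ten example values below this gives exactly 4.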
## Example

Query:
```sql
DROP TABLE IF EXISTS test_data;
CREATE TABLE test_data
(
population UInt8,
)
ENGINE = Log;
INSERT INTO test_data VALUES (3),(3),(3),(4),(4),(5),(5),(7),(11),(15);
SELECT
stddevSampStable(population)
FROM test_data;
```
Result:

```response
┌─stddevSampStable(population)─┐
│                            4 │
└──────────────────────────────┘
``` | {"source_file": "stddevsampstable.md"} | [...] |
bda7ea22-db27-42cf-b676-d02809808665 | description: 'Calculates the population variance.'
sidebar_position: 210
slug: /sql-reference/aggregate-functions/reference/varPop
title: 'varPop'
doc_type: 'reference'
# varPop {#varpop}

Calculates the population variance:

$$
\frac{\Sigma{(x - \bar{x})^2}}{n}
$$
## Syntax

```sql
varPop(x)
```

Alias: `VAR_POP`.
## Parameters

- `x`: Population of values to find the population variance of. `(U)Int*`, `Float*`, `Decimal*`.
## Returned value

Returns the population variance of `x`. `Float64`.
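The formula can be checked directly with a naive two-pass implementation:

```python
def var_pop(xs):
    """Population variance: sum((x - mean)^2) / n."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / n
```

For the ten example values below this gives 14.4.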
## Example

Query:
```sql
DROP TABLE IF EXISTS test_data;
CREATE TABLE test_data
(
x UInt8,
)
ENGINE = Memory;
INSERT INTO test_data VALUES (3), (3), (3), (4), (4), (5), (5), (7), (11), (15);
SELECT
varPop(x) AS var_pop
FROM test_data;
```
Result:

```response
┌─var_pop─┐
│    14.4 │
└─────────┘
``` | {"source_file": "varpop.md"} | [...] |
e79a0054-4460-4732-959d-ebaf28e1bc24 | description: 'Exactly computes the quantile of a numeric data sequence, taking into account the weight of each element.'
sidebar_position: 174
slug: /sql-reference/aggregate-functions/reference/quantileexactweighted
title: 'quantileExactWeighted'
doc_type: 'reference'
# quantileExactWeighted

Exactly computes the quantile of a numeric data sequence, taking into account the weight of each element.

To get the exact value, all the passed values are combined into an array, which is then partially sorted. Each value is counted with its weight, as if it is present `weight` times. A hash table is used in the algorithm. Because of this, if the passed values are frequently repeated, the function consumes less RAM than `quantileExact`. You can use this function instead of `quantileExact` and specify the weight 1.
When using multiple `quantile*` functions with different levels in a query, the internal states are not combined (that is, the query works less efficiently than it could). In this case, use the `quantiles` function.
## Syntax

```sql
quantileExactWeighted(level)(expr, weight)
```

Alias: `medianExactWeighted`.
## Arguments

- `level` — Level of quantile. Optional parameter. Constant floating-point number from 0 to 1. We recommend using a `level` value in the range of `[0.01, 0.99]`. Default value: 0.5. At `level=0.5` the function calculates the median.
- `expr` — Expression over the column values resulting in numeric data types, `Date` or `DateTime`.
- `weight` — Column with weights of sequence members. Weight is a number of value occurrences with unsigned integer types.
## Returned value

Quantile of the specified level.

Type:

- `Float64` for numeric data type input.
- `Date` if input values have the `Date` type.
- `DateTime` if input values have the `DateTime` type.
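The weighted-quantile rule can be sketched as: sort the values, accumulate weights, and return the first value whose cumulative weight reaches `level * total_weight` (a simplified reading, not the hash-table implementation):

```python
def quantile_exact_weighted(level, values, weights):
    """First value (in sorted order) whose cumulative weight reaches level * total."""
    threshold = level * sum(weights)
    cumulative = 0
    for value, weight in sorted(zip(values, weights)):
        cumulative += weight
        if cumulative >= threshold:
            return value
    return max(values)
```

For the example table below, `quantile_exact_weighted(0.5, [0, 1, 2, 5], [3, 2, 1, 4])` returns 1.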
## Example

Input table:

```text
┌─n─┬─val─┐
│ 0 │   3 │
│ 1 │   2 │
│ 2 │   1 │
│ 5 │   4 │
└───┴─────┘
```
Query:

```sql
SELECT quantileExactWeighted(n, val) FROM t
```

Result:

```text
┌─quantileExactWeighted(n, val)─┐
│                             1 │
└───────────────────────────────┘
```

## See Also

- median
- quantiles | {"source_file": "quantileexactweighted.md"} | [...] |
1698871d-1821-4a4b-bb44-019aa9712cd2 | description: 'Returns the population covariance matrix over N variables.'
sidebar_position: 122
slug: /sql-reference/aggregate-functions/reference/covarpopmatrix
title: 'covarPopMatrix'
doc_type: 'reference'
# covarPopMatrix

Returns the population covariance matrix over N variables.

## Syntax

```sql
covarPopMatrix(x[, ...])
```
## Arguments

- `x` — a variable number of parameters. `(U)Int*`, `Float*`, `Decimal`.
## Returned Value

Population covariance matrix. `Array(Array(Float64))`.
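Element-wise, the matrix is just pairwise population covariance, cov(x, y) = mean(x*y) - mean(x) * mean(y); a compact sketch:

```python
def covar_pop_matrix(*columns):
    """Population covariance matrix over the given equal-length columns."""
    n = len(columns[0])
    means = [sum(col) / n for col in columns]
    return [
        [sum(a * b for a, b in zip(x, y)) / n - mx * my
         for y, my in zip(columns, means)]
        for x, mx in zip(columns, means)
    ]
```

For the example data below, the (a, a) entry is varPop(a) = 8.25 and the (a, b) entry rounds to -1.76, matching the first result row.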
## Example

Query:

```sql
DROP TABLE IF EXISTS test;
CREATE TABLE test
(
    a UInt32,
    b Float64,
    c Float64,
    d Float64
)
ENGINE = Memory;
INSERT INTO test(a, b, c, d) VALUES (1, 5.6, -4.4, 2.6), (2, -9.6, 3, 3.3), (3, -1.3, -4, 1.2), (4, 5.3, 9.7, 2.3), (5, 4.4, 0.037, 1.222), (6, -8.6, -7.8, 2.1233), (7, 5.1, 9.3, 8.1222), (8, 7.9, -3.6, 9.837), (9, -8.2, 0.62, 8.43555), (10, -3, 7.3, 6.762);
```

```sql
SELECT arrayMap(x -> round(x, 3), arrayJoin(covarPopMatrix(a, b, c, d))) AS covarPopMatrix
FROM test;
```
Result:

```reference
   ┌─covarPopMatrix────────────┐
1. │ [8.25,-1.76,4.08,6.748]   │
2. │ [-1.76,41.07,6.486,2.132] │
3. │ [4.08,6.486,34.21,4.755]  │
4. │ [6.748,2.132,4.755,9.93]  │
   └───────────────────────────┘
``` | {"source_file": "covarpopmatrix.md"} | [...] |
2099e10b-638a-40c2-97c4-d0f9dc79d4da | description: 'The function plots a frequency histogram for values x and the repetition rate y of these values over the interval [min_x, max_x].'
sidebar_label: 'sparkbar'
sidebar_position: 187
slug: /sql-reference/aggregate-functions/reference/sparkbar
title: 'sparkbar'
doc_type: 'reference'
# sparkbar

The function plots a frequency histogram for values `x` and the repetition rate `y` of these values over the interval `[min_x, max_x]`. Repetitions for all `x` falling into the same bucket are averaged, so data should be pre-aggregated. Negative repetitions are ignored.

If no interval is specified, then the minimum `x` is used as the interval start, and the maximum `x` as the interval end. Otherwise, values outside the interval are ignored.
## Syntax

```sql
sparkbar(buckets[, min_x, max_x])(x, y)
```
## Parameters

- `buckets` — The number of segments. Type: `Integer`.
- `min_x` — The interval start. Optional parameter.
- `max_x` — The interval end. Optional parameter.
## Arguments

- `x` — The field with values.
- `y` — The field with the frequency of values.
## Returned value

The frequency histogram.
## Example

Query:

```sql
CREATE TABLE spark_bar_data (`value` Int64, `event_date` Date) ENGINE = MergeTree ORDER BY event_date;

INSERT INTO spark_bar_data VALUES (1,'2020-01-01'), (3,'2020-01-02'), (4,'2020-01-02'), (-3,'2020-01-02'), (5,'2020-01-03'), (2,'2020-01-04'), (3,'2020-01-05'), (7,'2020-01-06'), (6,'2020-01-07'), (8,'2020-01-08'), (2,'2020-01-11');

SELECT sparkbar(9)(event_date,cnt) FROM (SELECT sum(value) as cnt, event_date FROM spark_bar_data GROUP BY event_date);

SELECT sparkbar(9, toDate('2020-01-01'), toDate('2020-01-10'))(event_date,cnt) FROM (SELECT sum(value) as cnt, event_date FROM spark_bar_data GROUP BY event_date);
```
Result:
```text
ββsparkbar(9)(event_date, cnt)ββ
β ββ
ββββ β β
ββββββββββββββββββββββββββββββββ
ββsparkbar(9, toDate('2020-01-01'), toDate('2020-01-10'))(event_date, cnt)ββ
β ββ
βββββ β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
```
The alias for this function is sparkBar. | {"source_file": "sparkbar.md"} | [
0.03434651345014572,
-0.048605743795633316,
0.029800422489643097,
-0.027471965178847313,
-0.08937704563140869,
0.042223427444696426,
0.040146324783563614,
0.06387205421924591,
-0.026488956063985825,
-0.06143094226717949,
-0.011623498983681202,
-0.03322219103574753,
0.07187943160533905,
-0.... |
a8c6a5bf-2e9e-43c1-8fee-12c6a8283d61 | description: 'The
contingency
function calculates the contingency coefficient, a
value that measures the association between two columns in a table. The computation
is similar to the
cramersV
function but with a different denominator in the square
root.'
sidebar_position: 116
slug: /sql-reference/aggregate-functions/reference/contingency
title: 'contingency'
doc_type: 'reference'
contingency
The
contingency
function calculates the
contingency coefficient
, a value that measures the association between two columns in a table. The computation is similar to
the
cramersV
function
but with a different denominator in the square root.
Syntax
sql
contingency(column1, column2)
Arguments
column1
and
column2
are the columns to be compared
Returned value
a value between 0 and 1. The larger the result, the closer the association of the two columns.
Return type
is always
Float64
.
Example
The two columns being compared below have a small association with each other. We have included the result of
cramersV
also (as a comparison):
sql
SELECT
cramersV(a, b),
contingency(a ,b)
FROM
(
SELECT
number % 10 AS a,
number % 4 AS b
FROM
numbers(150)
);
Result:
response
ββββββcramersV(a, b)ββ¬ββcontingency(a, b)ββ
β 0.5798088336225178 β 0.0817230766271248 β
ββββββββββββββββββββββ΄βββββββββββββββββββββ | {"source_file": "contingency.md"} | [
-0.03282719850540161,
-0.04547450691461563,
-0.07795868068933487,
-0.0062669082544744015,
0.013951854780316353,
-0.008669576607644558,
0.05069805309176445,
0.08309207111597061,
-0.014449269510805607,
0.002070343354716897,
0.00815511867403984,
-0.02217782847583294,
0.04475617781281471,
-0.0... |
0b7edeca-999d-4f62-936d-de2ea75ce2b2 | description: 'This function implements stochastic linear regression. It supports custom
parameters for learning rate, L2 regularization coefficient, mini-batch size, and
has a few methods for updating weights (Adam, simple SGD, Momentum, Nesterov.)'
sidebar_position: 192
slug: /sql-reference/aggregate-functions/reference/stochasticlinearregression
title: 'stochasticLinearRegression'
doc_type: 'reference'
stochasticLinearRegression {#agg_functions_stochasticlinearregression_parameters}
This function implements stochastic linear regression. It supports custom parameters for learning rate, L2 regularization coefficient, mini-batch size, and has a few methods for updating weights (
Adam
(used by default),
simple SGD
,
Momentum
, and
Nesterov
).
Parameters {#parameters}
There are 4 customizable parameters. They are passed to the function sequentially, but there is no need to pass all four: default values are used for omitted ones. A good model nevertheless requires some parameter tuning.
text
stochasticLinearRegression(0.00001, 0.1, 15, 'Adam')
learning rate
is the coefficient on step length, when the gradient descent step is performed. A learning rate that is too big may cause infinite weights of the model. Default is
0.00001
.
l2 regularization coefficient
which may help to prevent overfitting. Default is
0.1
.
mini-batch size
sets the number of elements whose gradients are computed and summed to perform one step of gradient descent. Pure stochastic descent uses one element; however, small batches (about 10 elements) make gradient steps more stable. Default is
15
.
method for updating weights
; the options are:
Adam
(by default),
SGD
,
Momentum
, and
Nesterov
.
Momentum
and
Nesterov
require somewhat more computation and memory; however, they are often useful for the speed of convergence and the stability of stochastic gradient methods.
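As a rough illustration of what one update of the simple SGD method does (a sketch under simplified assumptions, not ClickHouse's implementation), the step below averages mean-squared-error gradients over a mini-batch and applies the learning rate and L2 penalty:

```python
def sgd_step(weights, bias, batch, lr=0.00001, l2=0.1):
    """One mini-batch SGD step for linear regression (sketch, not ClickHouse's code).

    `batch` is a list of (target, features) pairs; the loss is mean squared
    error plus an L2 penalty on the weights.
    """
    n = len(batch)
    grad_w = [0.0] * len(weights)
    grad_b = 0.0
    for target, features in batch:
        prediction = sum(w * f for w, f in zip(weights, features)) + bias
        error = prediction - target
        for i, f in enumerate(features):
            grad_w[i] += error * f
        grad_b += error
    # Average the gradients over the mini-batch; the L2 term shrinks the weights.
    new_weights = [w - lr * (g / n + l2 * w) for w, g in zip(weights, grad_w)]
    new_bias = bias - lr * grad_b / n
    return new_weights, new_bias
```

The Momentum and Nesterov methods additionally keep a velocity term blended into this update; Adam also tracks per-weight moment estimates.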
Usage {#usage}
stochasticLinearRegression
is used in two steps: fitting the model and predicting on new data. In order to fit the model and save its state for later usage, we use the
-State
combinator, which saves the state (e.g. model weights).
To predict, we use the function
evalMLMethod
, which takes a state as an argument as well as features to predict on.
1.
Fitting
A query such as the following may be used:
```sql
CREATE TABLE IF NOT EXISTS train_data
(
param1 Float64,
param2 Float64,
target Float64
) ENGINE = Memory;
CREATE TABLE your_model ENGINE = Memory AS SELECT
stochasticLinearRegressionState(0.1, 0.0, 5, 'SGD')(target, param1, param2)
AS state FROM train_data;
```
Here, we also need to insert data into the
train_data
table. The number of parameters is not fixed; it depends only on the number of arguments passed into
stochasticLinearRegressionState
. They all must be numeric values.
Note that the column with target value (which we would like to learn to predict) is inserted as the first argument.
2.
Predicting | {"source_file": "stochasticlinearregression.md"} | [
-0.0635216012597084,
-0.050037141889333725,
-0.027188513427972794,
0.04793005809187889,
-0.05011715367436409,
0.02868536114692688,
0.0003018066636286676,
0.008218887262046337,
-0.02873709797859192,
-0.04357236996293068,
0.03567155450582504,
0.041374996304512024,
-0.052998218685388565,
-0.1... |
b78ec9aa-e919-4d04-9fd6-c2074670afb6 | 2.
Predicting
After saving a state into the table, we may use it multiple times for prediction or even merge with other states and create new, even better models.
sql
WITH (SELECT state FROM your_model) AS model SELECT
evalMLMethod(model, param1, param2) FROM test_data
The query will return a column of predicted values. Note that the first argument of
evalMLMethod
is an
AggregateFunctionState
object; the next arguments are columns of features.
test_data
is a table like
train_data
but may not contain target value.
Notes {#notes}
To merge two models, a user may run a query such as:
sql SELECT state1 + state2 FROM your_models
where
your_models
table contains both models. This query will return a new
AggregateFunctionState
object.
A user may fetch the weights of the created model for their own purposes without saving the model if no
-State
combinator is used.
sql SELECT stochasticLinearRegression(0.01)(target, param1, param2) FROM train_data
Such a query will fit the model and return its weights: first the weights corresponding to the parameters of the model, and last the bias. So in the example above, the query will return a column with 3 values.
See Also
stochasticLogisticRegression
Difference between linear and logistic regressions | {"source_file": "stochasticlinearregression.md"} | [
-0.08581724762916565,
-0.06367165595293045,
-0.06363050639629364,
0.13091273605823517,
0.0318668931722641,
0.0033726077526807785,
0.06034671515226364,
0.05786766856908798,
-0.044739291071891785,
-0.0590912401676178,
0.03540068492293358,
-0.05090764909982681,
0.05331723019480705,
-0.0630561... |
2fa6df3f-7e90-4ba4-8442-e8ea4b578d92 | description: 'Provides a statistical test for one-way analysis of variance (ANOVA
test). It is a test over several groups of normally distributed observations to
find out whether all groups have the same mean or not.'
sidebar_position: 101
slug: /sql-reference/aggregate-functions/reference/analysis_of_variance
title: 'analysisOfVariance'
doc_type: 'reference'
analysisOfVariance
Provides a statistical test for one-way analysis of variance (ANOVA test). It is a test over several groups of normally distributed observations to find out whether all groups have the same mean or not.
Syntax
sql
analysisOfVariance(val, group_no)
Aliases:
anova
Parameters
-
val
: value.
-
group_no
: group number that
val
belongs to.
:::note
Groups are enumerated starting from 0 and there should be at least two groups to perform a test.
There should be at least one group with the number of observations greater than one.
:::
Returned value
(f_statistic, p_value)
.
Tuple
(
Float64
,
Float64
).
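The f_statistic component can be sketched in plain Python as the ratio of between-group to within-group mean squares (an illustration, not ClickHouse's implementation; computing the p_value additionally requires the F-distribution CDF, which is omitted here):

```python
def anova_f(groups):
    """One-way ANOVA F-statistic: between-group over within-group mean squares.

    `groups` is a list of lists of observations, one inner list per group.
    """
    k = len(groups)                  # number of groups
    n = sum(len(g) for g in groups)  # total number of observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Sum of squares between groups and within groups
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))
```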
Example
Query:
sql
SELECT analysisOfVariance(number, number % 2) FROM numbers(1048575);
Result:
response
ββanalysisOfVariance(number, modulo(number, 2))ββ
β (0,1) β
βββββββββββββββββββββββββββββββββββββββββββββββββ | {"source_file": "analysis_of_variance.md"} | [
0.028773048892617226,
-0.045236989855766296,
0.03252045065164566,
0.00647842837497592,
-0.05693143233656883,
-0.03499006852507591,
0.05444081500172615,
0.03470967337489128,
0.030358390882611275,
0.017726799473166466,
0.020084723830223083,
-0.021485844627022743,
0.004641978535801172,
-0.015... |
6278725c-93cd-4ae2-bb2b-c7ede03b0c17 | description: 'Aggregate function that calculates PromQL-like resets over time series data on the specified grid.'
sidebar_position: 230
slug: /sql-reference/aggregate-functions/reference/timeSeriesResetsToGrid
title: 'timeSeriesResetsToGrid'
doc_type: 'reference'
Aggregate function that takes time series data as pairs of timestamps and values and calculates
PromQL-like resets
from this data on a regular time grid described by start timestamp, end timestamp and step. For each point on the grid the samples for calculating
resets
are considered within the specified time window.
Parameters:
-
start timestamp
- specifies start of the grid
-
end timestamp
- specifies end of the grid
-
grid step
- specifies step of the grid in seconds
-
staleness
- specifies the maximum "staleness" in seconds of the considered samples
Arguments:
-
timestamp
- timestamp of the sample
-
value
- value of the time series corresponding to the
timestamp
Return value:
resets
values on the specified grid as an
Array(Nullable(Float64))
. The returned array contains one value for each time grid point. The value is NULL if there are no samples within the window to calculate the resets value for a particular grid point.
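A simplified Python model of this computation is sketched below (illustration only; the exact window bounds are an assumption chosen to reproduce the example that follows, and details such as counter-reset handling may differ in ClickHouse):

```python
def resets_to_grid(timestamps, values, start, end, step, window):
    """Count value decreases between consecutive in-window samples per grid point (sketch)."""
    result = []
    for t in range(start, end + 1, step):
        window_vals = [v for ts, v in zip(timestamps, values) if t - window <= ts <= t]
        if not window_vals:
            result.append(None)  # no samples in the staleness window -> NULL
        else:
            result.append(sum(1 for a, b in zip(window_vals, window_vals[1:]) if b < a))
    return result
```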
Example:
The following query calculates
resets
values on the grid [90, 105, 120, 135, 150, 165, 180, 195, 210, 225]:
sql
WITH
-- NOTE: the gap between 130 and 190 is to show how values are filled for ts = 180 according to window parameter
[110, 120, 130, 190, 200, 210, 220, 230]::Array(DateTime) AS timestamps,
[1, 3, 2, 6, 6, 4, 2, 0]::Array(Float32) AS values, -- array of values corresponding to timestamps above
90 AS start_ts, -- start of timestamp grid
90 + 135 AS end_ts, -- end of timestamp grid
15 AS step_seconds, -- step of timestamp grid
45 AS window_seconds -- "staleness" window
SELECT timeSeriesResetsToGrid(start_ts, end_ts, step_seconds, window_seconds)(timestamp, value)
FROM
(
-- This subquery converts arrays of timestamps and values into rows of `timestamp`, `value`
SELECT
arrayJoin(arrayZip(timestamps, values)) AS ts_and_val,
ts_and_val.1 AS timestamp,
ts_and_val.2 AS value
);
Response:
response
ββtimeSeriesResetsToGrid(start_ts, end_ts, step_seconds, window_seconds)(timestamp, value)ββ
1. β [NULL,NULL,0,1,1,1,NULL,0,1,2] β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Also it is possible to pass multiple samples of timestamps and values as Arrays of equal size. The same query with array arguments:
sql
WITH
[110, 120, 130, 190, 200, 210, 220, 230]::Array(DateTime) AS timestamps,
[1, 3, 2, 6, 6, 4, 2, 0]::Array(Float32) AS values,
90 AS start_ts,
90 + 135 AS end_ts,
15 AS step_seconds,
45 AS window_seconds
SELECT timeSeriesResetsToGrid(start_ts, end_ts, step_seconds, window_seconds)(timestamps, values); | {"source_file": "timeSeriesResetsToGrid.md"} | [
-0.12168758362531662,
-0.006407720968127251,
-0.08571506291627884,
0.08752690255641937,
-0.06638512015342712,
-0.0035526244901120663,
0.0008013423648662865,
0.05492984130978584,
0.034373145550489426,
-0.006875900086015463,
-0.05007757991552353,
-0.02687799744307995,
-0.022512245923280716,
... |
93af4381-ce27-4e6d-a55c-f301013099ec | :::note
This function is experimental, enable it by setting
allow_experimental_ts_to_grid_aggregate_function=true
.
::: | {"source_file": "timeSeriesResetsToGrid.md"} | [
-0.04617350548505783,
-0.037194281816482544,
-0.013469654135406017,
0.09186480939388275,
0.022367699071764946,
0.004247533623129129,
0.002288512885570526,
-0.07123781740665436,
0.0007586118299514055,
0.07126599550247192,
-0.006106517277657986,
-0.012565995566546917,
-0.00505802920088172,
-... |
93122194-ff39-4cd5-b25b-742d66f94303 | description: 'Calculates a concatenated string from a group of strings, optionally
separated by a delimiter, and optionally limited by a maximum number of elements.'
sidebar_label: 'groupConcat'
sidebar_position: 363
slug: /sql-reference/aggregate-functions/reference/groupconcat
title: 'groupConcat'
doc_type: 'reference'
Calculates a concatenated string from a group of strings, optionally separated by a delimiter, and optionally limited by a maximum number of elements.
Syntax
sql
groupConcat[(delimiter [, limit])](expression);
Alias:
group_concat
Arguments
expression
β The expression or column name that outputs strings to be concatenated.
delimiter
β A
string
that will be used to separate concatenated values. This argument is optional; if it is not specified, the delimiter from the parameters is used, or an empty string if none was given.
Parameters
delimiter
β A
string
that will be used to separate concatenated values. This parameter is optional and defaults to an empty string if not specified.
limit
β A positive
integer
specifying the maximum number of elements to concatenate. If more elements are present, excess elements are ignored. This parameter is optional.
:::note
If delimiter is specified without limit, it must be the first parameter. If both delimiter and limit are specified, delimiter must precede limit.
Also, if different delimiters are specified as a parameter and as an argument, only the delimiter from the arguments is used.
:::
Returned value
Returns a
string
consisting of the concatenated values of the column or expression. If the group has no elements, or only NULL elements, the result is a nullable string with a NULL value.
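The concatenation semantics can be sketched in Python (an illustration of the behaviour, not ClickHouse's implementation; the function name is hypothetical):

```python
def group_concat(values, delimiter="", limit=None):
    """Concatenate non-NULL values with a delimiter, keeping at most `limit` elements."""
    kept = [str(v) for v in values if v is not None]  # NULL values are skipped
    if limit is not None:
        kept = kept[:limit]  # excess elements are ignored
    return delimiter.join(kept) if kept else None  # empty or NULL-only group -> NULL
```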
Examples
Input table:
text
ββidββ¬βnameββ
β 1 β John β
β 2 β Jane β
β 3 β Bob β
ββββββ΄βββββββ
Basic usage without a delimiter:
Query:
sql
SELECT groupConcat(Name) FROM Employees;
Result:
text
JohnJaneBob
This concatenates all names into one continuous string without any separator.
Using comma as a delimiter:
Query:
sql
SELECT groupConcat(', ')(Name) FROM Employees;
or
sql
SELECT groupConcat(Name, ', ') FROM Employees;
Result:
text
John, Jane, Bob
This output shows the names separated by a comma followed by a space.
Limiting the number of concatenated elements
Query:
sql
SELECT groupConcat(', ', 2)(Name) FROM Employees;
Result:
text
John, Jane
This query limits the output to the first two names, even though there are more names in the table. | {"source_file": "groupconcat.md"} | [
0.02452375739812851,
0.014115409925580025,
-0.03517678752541542,
0.0429806262254715,
-0.05935696139931679,
0.024213887751102448,
0.08726494014263153,
0.05176921188831329,
-0.027824338525533676,
-0.042496562004089355,
0.051443979144096375,
-0.0346212200820446,
0.05917811021208763,
-0.135179... |
9bed5354-add2-452a-8040-7e5c66f6adc7 | description: 'Returns the maximum of the computed exponentially smoothed moving average
at index
t
in time with that at
t-1
. '
sidebar_position: 135
slug: /sql-reference/aggregate-functions/reference/exponentialTimeDecayedMax
title: 'exponentialTimeDecayedMax'
doc_type: 'reference'
exponentialTimeDecayedMax {#exponentialtimedecayedmax}
Returns the maximum of the computed exponentially smoothed moving average at index
t
in time with that at
t-1
.
Syntax
sql
exponentialTimeDecayedMax(x)(value, timeunit)
Arguments
value
β Value.
Integer
,
Float
or
Decimal
.
timeunit
β Timeunit.
Integer
,
Float
or
Decimal
,
DateTime
,
DateTime64
.
Parameters
x
β Half-life period.
Integer
,
Float
or
Decimal
.
Returned values
Returns the maximum of the exponentially smoothed weighted moving average at
t
and
t-1
.
Float64
.
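A Python sketch of the underlying computation (illustration only; it treats x as an e-folding time constant, an assumption that reproduces the sample output below):

```python
import math

def exp_time_decayed_max(values, times, x):
    """Running maximum of samples decayed by exp(-(t - t_sample) / x) (sketch)."""
    result, seen = [], []
    for v, t in zip(values, times):
        seen.append((v, t))
        # Each past sample decays exponentially with the time elapsed since it was seen.
        result.append(max(val * math.exp(-(t - ts) / x) for val, ts in seen))
    return result
```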
Example
Query:
sql
SELECT
value,
time,
round(exp_smooth, 3),
bar(exp_smooth, 0, 5, 50) AS bar
FROM
(
SELECT
(number = 0) OR (number >= 25) AS value,
number AS time,
exponentialTimeDecayedMax(10)(value, time) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS exp_smooth
FROM numbers(50)
);
Result: | {"source_file": "exponentialtimedecayedmax.md"} | [
-0.07823941111564636,
-0.059251975268125534,
0.010187679901719093,
-0.0018608906539157033,
-0.05627407878637314,
-0.10420355200767517,
0.07351403683423996,
0.07090736925601959,
0.015245992690324783,
0.015106003731489182,
0.022353006526827812,
-0.07012749463319778,
0.059211935847997665,
-0.... |
9324c5c6-e1b3-49e2-b4ea-327532fd0356 | Result:
response
ββvalueββ¬βtimeββ¬βround(exp_smooth, 3)ββ¬βbarβββββββββ
1. β 1 β 0 β 1 β ββββββββββ β
2. β 0 β 1 β 0.905 β βββββββββ β
3. β 0 β 2 β 0.819 β βββββββββ β
4. β 0 β 3 β 0.741 β ββββββββ β
5. β 0 β 4 β 0.67 β βββββββ β
6. β 0 β 5 β 0.607 β ββββββ β
7. β 0 β 6 β 0.549 β ββββββ β
8. β 0 β 7 β 0.497 β βββββ β
9. β 0 β 8 β 0.449 β βββββ β
10. β 0 β 9 β 0.407 β ββββ β
11. β 0 β 10 β 0.368 β ββββ β
12. β 0 β 11 β 0.333 β ββββ β
13. β 0 β 12 β 0.301 β βββ β
14. β 0 β 13 β 0.273 β βββ β
15. β 0 β 14 β 0.247 β βββ β
16. β 0 β 15 β 0.223 β βββ β
17. β 0 β 16 β 0.202 β ββ β
18. β 0 β 17 β 0.183 β ββ β
19. β 0 β 18 β 0.165 β ββ β
20. β 0 β 19 β 0.15 β ββ β
21. β 0 β 20 β 0.135 β ββ β
22. β 0 β 21 β 0.122 β ββ β
23. β 0 β 22 β 0.111 β β β
24. β 0 β 23 β 0.1 β β β
25. β 0 β 24 β 0.091 β β β
26. β 1 β 25 β 1 β ββββββββββ β
27. β 1 β 26 β 1 β ββββββββββ β
28. β 1 β 27 β 1 β ββββββββββ β
29. β 1 β 28 β 1 β ββββββββββ β
30. β 1 β 29 β 1 β ββββββββββ β
31. β 1 β 30 β 1 β ββββββββββ β
32. β 1 β 31 β 1 β ββββββββββ β
33. β 1 β 32 β 1 β ββββββββββ β
34. β 1 β 33 β 1 β ββββββββββ β
35. β 1 β 34 β 1 β ββββββββββ β
36. β 1 β 35 β 1 β ββββββββββ β
37. β 1 β 36 β 1 β ββββββββββ β
38. β 1 β 37 β 1 β ββββββββββ β
39. β 1 β 38 β 1 β ββββββββββ β
40. β 1 β 39 β 1 β ββββββββββ β
41. β 1 β 40 β 1 β ββββββββββ β
42. β 1 β 41 β 1 β ββββββββββ β
43. β 1 β 42 β 1 β ββββββββββ β
44. β 1 β 43 β 1 β ββββββββββ β
45. β 1 β 44 β 1 β ββββββββββ β
46. β 1 β 45 β 1 β ββββββββββ β
47. β 1 β 46 β 1 β ββββββββββ β
48. β 1 β 47 β 1 β ββββββββββ β
49. β 1 β 48 β 1 β ββββββββββ β
50. β 1 β 49 β 1 β ββββββββββ β
βββββββββ΄βββββββ΄βββββββββββββββββββββββ΄βββββββββββββ | {"source_file": "exponentialtimedecayedmax.md"} | [
-0.07088932394981384,
-0.004207790829241276,
-0.01858203113079071,
0.007326469291001558,
-0.015286010690033436,
-0.13404949009418488,
0.04915459826588631,
-0.026068631559610367,
-0.011233176104724407,
0.03507913649082184,
0.061965376138687134,
-0.06657350808382034,
0.01880638487637043,
-0.... |
d7533588-1d59-4d92-a562-554a00063fb8 | description: 'Selects the first encountered value of a column.'
sidebar_position: 102
slug: /sql-reference/aggregate-functions/reference/any
title: 'any'
doc_type: 'reference'
any
Selects the first encountered value of a column.
:::warning
As a query can be executed in arbitrary order, the result of this function is non-deterministic.
If you need an arbitrary but deterministic result, use functions
min
or
max
.
:::
By default, the function never returns NULL, i.e. ignores NULL values in the input column.
However, if the function is used with the
RESPECT NULLS
modifier, it returns the first value read, whether it is NULL or not.
Syntax
sql
any(column) [RESPECT NULLS]
Aliases
any(column)
(without
RESPECT NULLS
)
-
any_value
-
first_value
.
Alias for
any(column) RESPECT NULLS
-
anyRespectNulls
,
any_respect_nulls
-
firstValueRespectNulls
,
first_value_respect_nulls
-
anyValueRespectNulls
,
any_value_respect_nulls
Parameters
-
column
: The column name.
Returned value
The first value encountered.
:::note
The return type of the function is the same as the input, except for LowCardinality which is discarded.
This means that given no rows as input it will return the default value of that type (0 for integers, or Null for a Nullable() column).
You might use the
-OrNull
combinator
to modify this behaviour.
:::
Implementation details
In some cases, you can rely on the order of execution.
This applies to cases when
SELECT
comes from a subquery that uses
ORDER BY
.
When a
SELECT
query has the
GROUP BY
clause or at least one aggregate function, ClickHouse (in contrast to MySQL) requires that all expressions in the
SELECT
,
HAVING
, and
ORDER BY
clauses be calculated from keys or from aggregate functions.
In other words, each column selected from the table must be used either in keys or inside aggregate functions.
To get behavior like in MySQL, you can put the other columns in the
any
aggregate function.
Example
Query:
```sql
CREATE TABLE tab (city Nullable(String)) ENGINE=Memory;
INSERT INTO tab (city) VALUES (NULL), ('Amsterdam'), ('New York'), ('Tokyo'), ('Valencia'), (NULL);
SELECT any(city), anyRespectNulls(city) FROM tab;
```
response
ββany(city)ββ¬βanyRespectNulls(city)ββ
β Amsterdam β α΄Ία΅α΄Έα΄Έ β
βββββββββββββ΄ββββββββββββββββββββββββ | {"source_file": "any.md"} | [
-0.00964006781578064,
-0.0035943251568824053,
-0.02271316945552826,
0.031792402267456055,
-0.04503073915839195,
0.0009679798968136311,
0.0423256941139698,
0.04810962826013565,
0.02124776504933834,
0.03324991464614868,
0.06732238829135895,
-0.04184512048959732,
0.0279457475990057,
-0.091403... |
1380e6b4-351e-4c09-8f57-6c5d4a0fe24d | description: 'Returns the sample covariance matrix over N variables.'
sidebar_position: 125
slug: /sql-reference/aggregate-functions/reference/covarsampmatrix
title: 'covarSampMatrix'
doc_type: 'reference'
covarSampMatrix
Returns the sample covariance matrix over N variables.
Syntax
sql
covarSampMatrix(x[, ...])
Arguments
x
β a variable number of parameters.
(U)Int*
,
Float*
,
Decimal
.
Returned Value
Sample covariance matrix.
Array
(
Array
(
Float64
)).
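Entry (i, j) of the matrix is the sample covariance sum((x_i - mean_i) * (x_j - mean_j)) / (n - 1). A plain-Python sketch (illustration, not ClickHouse's implementation):

```python
def covar_samp_matrix(columns):
    """Sample covariance matrix of N equal-length columns (sketch)."""
    n = len(columns[0])
    means = [sum(c) / n for c in columns]
    # Entry (i, j) is the sample covariance of columns i and j.
    return [
        [sum((a - mi) * (b - mj) for a, b in zip(ci, cj)) / (n - 1)
         for cj, mj in zip(columns, means)]
        for ci, mi in zip(columns, means)
    ]
```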
Example
Query:
sql
DROP TABLE IF EXISTS test;
CREATE TABLE test
(
a UInt32,
b Float64,
c Float64,
d Float64
)
ENGINE = Memory;
INSERT INTO test(a, b, c, d) VALUES (1, 5.6, -4.4, 2.6), (2, -9.6, 3, 3.3), (3, -1.3, -4, 1.2), (4, 5.3, 9.7, 2.3), (5, 4.4, 0.037, 1.222), (6, -8.6, -7.8, 2.1233), (7, 5.1, 9.3, 8.1222), (8, 7.9, -3.6, 9.837), (9, -8.2, 0.62, 8.43555), (10, -3, 7.3, 6.762);
sql
SELECT arrayMap(x -> round(x, 3), arrayJoin(covarSampMatrix(a, b, c, d))) AS covarSampMatrix
FROM test;
Result:
reference
ββcovarSampMatrixββββββββββββββ
1. β [9.167,-1.956,4.534,7.498] β
2. β [-1.956,45.634,7.206,2.369] β
3. β [4.534,7.206,38.011,5.283] β
4. β [7.498,2.369,5.283,11.034] β
βββββββββββββββββββββββββββββββ | {"source_file": "covarsampmatrix.md"} | [
0.021804623305797577,
-0.00886972714215517,
-0.08329237252473831,
0.025271914899349213,
-0.04889749363064766,
-0.028407590463757515,
0.0570426769554615,
-0.01879201829433441,
-0.04672694951295853,
0.021160097792744637,
0.048391684889793396,
-0.03078167326748371,
0.01445781160145998,
-0.083... |
f9edf6ea-359c-4f15-81c1-02d13c962297 | description: 'Aggregate function that calculates PromQL-like irate over time series data on the specified grid.'
sidebar_position: 223
slug: /sql-reference/aggregate-functions/reference/timeSeriesInstantRateToGrid
title: 'timeSeriesInstantRateToGrid'
doc_type: 'reference'
Aggregate function that takes time series data as pairs of timestamps and values and calculates
PromQL-like irate
from this data on a regular time grid described by start timestamp, end timestamp and step. For each point on the grid the samples for calculating
irate
are considered within the specified time window.
Parameters:
-
start timestamp
- Specifies start of the grid.
-
end timestamp
- Specifies end of the grid.
-
grid step
- Specifies step of the grid in seconds.
-
staleness
- Specifies the maximum "staleness" in seconds of the considered samples. The staleness window is a left-open and right-closed interval.
Arguments:
-
timestamp
- timestamp of the sample
-
value
- value of the time series corresponding to the
timestamp
Return value:
irate
values on the specified grid as an
Array(Nullable(Float64))
. The returned array contains one value for each time grid point. The value is NULL if there are not enough samples within the window to calculate the instant rate value for a particular grid point.
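A simplified Python model of the rule (sketch only; the window is left-open and right-closed as stated above, and unlike PromQL's irate this version does not handle counter resets):

```python
def irate_to_grid(timestamps, values, start, end, step, window):
    """Slope of the last two samples in each (t - window, t] window (sketch)."""
    result = []
    for t in range(start, end + 1, step):
        sample = [(ts, v) for ts, v in zip(timestamps, values) if t - window < ts <= t]
        if len(sample) < 2:
            result.append(None)  # an instant rate needs at least two samples
        else:
            (t0, v0), (t1, v1) = sample[-2], sample[-1]
            result.append((v1 - v0) / (t1 - t0))
    return result
```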
Example:
The following query calculates
irate
values on the grid [90, 105, 120, 135, 150, 165, 180, 195, 210]:
sql
WITH
-- NOTE: the gap between 140 and 190 is to show how values are filled for ts = 150, 165, 180 according to window parameter
[110, 120, 130, 140, 190, 200, 210, 220, 230]::Array(DateTime) AS timestamps,
[1, 1, 3, 4, 5, 5, 8, 12, 13]::Array(Float32) AS values, -- array of values corresponding to timestamps above
90 AS start_ts, -- start of timestamp grid
90 + 120 AS end_ts, -- end of timestamp grid
15 AS step_seconds, -- step of timestamp grid
45 AS window_seconds -- "staleness" window
SELECT timeSeriesInstantRateToGrid(start_ts, end_ts, step_seconds, window_seconds)(timestamp, value)
FROM
(
-- This subquery converts arrays of timestamps and values into rows of `timestamp`, `value`
SELECT
arrayJoin(arrayZip(timestamps, values)) AS ts_and_val,
ts_and_val.1 AS timestamp,
ts_and_val.2 AS value
);
Response:
response
ββtimeSeriesInstantRaβ―timestamps, values)ββ
1. β [NULL,NULL,0,0.2,0.1,0.1,NULL,NULL,0.3] β
βββββββββββββββββββββββββββββββββββββββββββ
Also it is possible to pass multiple samples of timestamps and values as Arrays of equal size. The same query with array arguments:
sql
WITH
[110, 120, 130, 140, 190, 200, 210, 220, 230]::Array(DateTime) AS timestamps,
[1, 1, 3, 4, 5, 5, 8, 12, 13]::Array(Float32) AS values,
90 AS start_ts,
90 + 120 AS end_ts,
15 AS step_seconds,
45 AS window_seconds
SELECT timeSeriesInstantRateToGrid(start_ts, end_ts, step_seconds, window_seconds)(timestamps, values); | {"source_file": "timeSeriesInstantRateToGrid.md"} | [
-0.06053175404667854,
0.019316812977194786,
-0.13510777056217194,
0.048816680908203125,
-0.006379064172506332,
-0.04618095979094505,
0.043286241590976715,
0.06883393228054047,
0.05217159911990166,
0.0052466364577412605,
-0.02594233863055706,
-0.039088621735572815,
0.003449457697570324,
-0.... |
a72d8ddb-3f95-4fd0-94bd-9698cd5738de | :::note
This function is experimental, enable it by setting
allow_experimental_ts_to_grid_aggregate_function=true
.
::: | {"source_file": "timeSeriesInstantRateToGrid.md"} | [
-0.04617350548505783,
-0.037194281816482544,
-0.013469654135406017,
0.09186480939388275,
0.022367699071764946,
0.004247533623129129,
0.002288512885570526,
-0.07123781740665436,
0.0007586118299514055,
0.07126599550247192,
-0.006106517277657986,
-0.012565995566546917,
-0.00505802920088172,
-... |
4350d593-540d-4966-ac7e-4a754db5b3ee | description: 'Creates an array of the last argument values.'
sidebar_position: 142
slug: /sql-reference/aggregate-functions/reference/grouparraylast
title: 'groupArrayLast'
doc_type: 'reference'
groupArrayLast
Syntax:
groupArrayLast(max_size)(x)
Creates an array of the last argument values.
For example,
groupArrayLast(1)(x)
is equivalent to
[anyLast (x)]
.
In some cases, you can still rely on the order of execution. This applies to cases when
SELECT
comes from a subquery that uses
ORDER BY
if the subquery result is small enough.
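The semantics correspond to a bounded buffer that keeps only the most recent max_size values, e.g. in Python (illustration only):

```python
from collections import deque

def group_array_last(values, max_size):
    """Keep only the last `max_size` values seen (sketch of the semantics)."""
    buf = deque(maxlen=max_size)  # old elements fall off the front automatically
    for v in values:
        buf.append(v)
    return list(buf)
```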
Example
Query:
sql
SELECT groupArrayLast(2)(number+1) numbers FROM numbers(10)
Result:
text
ββnumbersββ
β [9,10] β
βββββββββββ
In comparison to
groupArray
:
sql
SELECT groupArray(2)(number+1) numbers FROM numbers(10)
text
ββnumbersββ
β [1,2] β
βββββββββββ | {"source_file": "grouparraylast.md"} | [
0.008255413733422756,
0.031250301748514175,
0.031296443194150925,
-0.000969285611063242,
-0.05785829573869705,
-0.02867140807211399,
0.007680272683501244,
0.0373331643640995,
0.05897735059261322,
-0.02105502039194107,
-0.033465851098299026,
0.14988075196743011,
0.00647264439612627,
-0.0736... |
37e235bf-7bcd-47f9-a214-510c4189bad3 | description: 'The aggregate function
singleValueOrNull
is used to implement subquery
operators, such as
x = ALL (SELECT ...)
. It checks if there is only one unique
non-NULL value in the data.'
sidebar_position: 184
slug: /sql-reference/aggregate-functions/reference/singlevalueornull
title: 'singleValueOrNull'
doc_type: 'reference'
singleValueOrNull
The aggregate function
singleValueOrNull
is used to implement subquery operators, such as
x = ALL (SELECT ...)
. It checks if there is only one unique non-NULL value in the data.
If there is only one unique value, it returns it. If there are zero or at least two distinct values, it returns NULL.
Syntax
sql
singleValueOrNull(x)
Parameters
x
β Column of any
data type
(except
Map
,
Array
or
Tuple
which cannot be of type
Nullable
).
Returned values
The unique value, if there is only one unique non-NULL value in
x
.
NULL
, if there are zero or at least two distinct values.
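The rule can be stated compactly in Python (illustration only):

```python
def single_value_or_null(values):
    """Return the value if exactly one distinct non-NULL value exists, else None."""
    distinct = {v for v in values if v is not None}  # NULLs do not count
    return distinct.pop() if len(distinct) == 1 else None
```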
Examples
Query:
sql
CREATE TABLE test (x UInt8 NULL) ENGINE=Log;
INSERT INTO test (x) VALUES (NULL), (NULL), (5), (NULL), (NULL);
SELECT singleValueOrNull(x) FROM test;
Result:
response
ββsingleValueOrNull(x)ββ
β 5 β
ββββββββββββββββββββββββ
Query:
sql
INSERT INTO test (x) VALUES (10);
SELECT singleValueOrNull(x) FROM test;
Result:
response
ββsingleValueOrNull(x)ββ
β α΄Ία΅α΄Έα΄Έ β
ββββββββββββββββββββββββ | {"source_file": "singlevalueornull.md"} | [
0.01358381099998951,
0.009724230505526066,
0.020755602046847343,
-0.005870559718459845,
-0.08209377527236938,
0.003172901924699545,
0.047264281660318375,
0.03883053734898567,
0.014983909204602242,
-0.02787274867296219,
0.047644004225730896,
-0.0747571736574173,
0.08801870793104172,
-0.1325... |
a2f0d27d-ee72-42ad-81ee-b629a86ace0e | description: 'The
theilsU
function calculates Theils'' U uncertainty coefficient,
a value that measures the association between two columns in a table.'
sidebar_position: 201
slug: /sql-reference/aggregate-functions/reference/theilsu
title: 'theilsU'
doc_type: 'reference'
theilsU
The
theilsU
function calculates the
Theil's U uncertainty coefficient
, a value that measures the association between two columns in a table. Its values range from -1.0 (100% negative association, or perfect inversion) to +1.0 (100% positive association, or perfect agreement). A value of 0.0 indicates the absence of association.
Syntax
sql
theilsU(column1, column2)
Arguments
column1
and
column2
are the columns to be compared
Returned value
a value between -1 and 1
Return type
is always
Float64
.
Example
The two columns compared below have a small association with each other, so the value of
theilsU
is negative:
sql
SELECT
theilsU(a, b)
FROM
(
SELECT
number % 10 AS a,
number % 4 AS b
FROM
numbers(150)
);
Result:
response
βββββββββtheilsU(a, b)ββ
β -0.30195720557678846 β
ββββββββββββββββββββββββ | {"source_file": "theilsu.md"} | [
-0.020878814160823822,
-0.0013982170494273305,
-0.08803881704807281,
0.02040642872452736,
-0.07903474569320679,
-0.053505171090364456,
0.007591625209897757,
0.052386123687028885,
-0.004136613104492426,
-0.0436803475022316,
0.07824808359146118,
-0.09644225239753723,
0.10168316215276718,
-0.... |
c9fa58bf-42be-4fbd-83ab-72166ec5a147 | description: 'The result of the
cramersV
function ranges from 0 (corresponding to
no association between the variables) to 1 and can reach 1 only when each value
is completely determined by the other. It may be viewed as the association between
two variables as a percentage of their maximum possible variation.'
sidebar_position: 127
slug: /sql-reference/aggregate-functions/reference/cramersv
title: 'cramersV'
doc_type: 'reference'
cramersV
Cramer's V
(sometimes referred to as Cramer's phi) is a measure of association between two columns in a table. The result of the
cramersV
function ranges from 0 (corresponding to no association between the variables) to 1 and can reach 1 only when each value is completely determined by the other. It may be viewed as the association between two variables as a percentage of their maximum possible variation.
:::note
For a bias corrected version of Cramer's V see:
cramersVBiasCorrected
:::
Syntax
sql
cramersV(column1, column2)
Parameters
column1
: first column to be compared.
column2
: second column to be compared.
Returned value
a value between 0 (corresponding to no association between the columns' values) to 1 (complete association).
Type: always
Float64
.
Example
The two columns compared below have no association with each other, so the result of
cramersV
is 0:
Query:
sql
SELECT
cramersV(a, b)
FROM
(
SELECT
number % 3 AS a,
number % 5 AS b
FROM
numbers(150)
);
Result:
response
ββcramersV(a, b)ββ
β 0 β
ββββββββββββββββββ
The two columns compared below have a fairly close association, so the result of
cramersV
is a high value:
sql
SELECT
cramersV(a, b)
FROM
(
SELECT
number % 10 AS a,
if(number % 12 = 0, (number + 1) % 5, number % 5) AS b
FROM
numbers(150)
);
Result:
response
ββββββcramersV(a, b)ββ
β 0.9066801892162646 β
ββββββββββββββββββββββ | {"source_file": "cramersv.md"} | [
0.014992996118962765,
-0.0598301887512207,
-0.12568341195583344,
0.019290927797555923,
0.005821262951940298,
0.0534680150449276,
0.001782165956683457,
0.04661202430725098,
-0.04243621975183487,
0.008016841486096382,
-0.023984218016266823,
-0.008113726042211056,
0.029152482748031616,
0.0169... |
18ec9d11-46e0-429f-878c-bd524036a123 | description: 'Selects the last encountered value, similar to
anyLast
, but could
accept NULL.'
sidebar_position: 160
slug: /sql-reference/aggregate-functions/reference/last_value
title: 'last_value'
doc_type: 'reference'
last_value
Selects the last encountered value, similar to
anyLast
, but could accept NULL.
Mostly it should be used with
Window Functions
.
Without Window Functions the result will be random if the source stream is not ordered.
Examples {#examples}
```sql
CREATE TABLE test_data
(
a Int64,
b Nullable(Int64)
)
ENGINE = Memory;
INSERT INTO test_data (a, b) VALUES (1,null), (2,3), (4, 5), (6,null)
```
Example 1 {#example1}
NULL values are ignored by default.
sql
SELECT last_value(b) FROM test_data
text
ββlast_value_ignore_nulls(b)ββ
β 5 β
ββββββββββββββββββββββββββββββ
Example 2 {#example2}
NULL values are ignored explicitly with the IGNORE NULLS modifier.
sql
SELECT last_value(b) ignore nulls FROM test_data
text
ββlast_value_ignore_nulls(b)ββ
β 5 β
ββββββββββββββββββββββββββββββ
Example 3 {#example3}
NULL values are kept with the RESPECT NULLS modifier.
sql
SELECT last_value(b) respect nulls FROM test_data
text
ββlast_value_respect_nulls(b)ββ
β α΄Ία΅α΄Έα΄Έ β
βββββββββββββββββββββββββββββββ
Example 4 {#example4}
The result can be stabilized by using a subquery with
ORDER BY
.
sql
SELECT
last_value_respect_nulls(b),
last_value(b)
FROM
(
SELECT *
FROM test_data
ORDER BY a ASC
)
text
ββlast_value_respect_nulls(b)ββ¬βlast_value(b)ββ
β α΄Ία΅α΄Έα΄Έ β 5 β
βββββββββββββββββββββββββββββββ΄ββββββββββββββββ | {"source_file": "last_value.md"} | [
-0.018264170736074448,
0.040359016507864,
0.013237888924777508,
0.026895929127931595,
-0.021418364718556404,
0.038726478815078735,
0.014242758974432945,
0.018881358206272125,
0.006185146514326334,
0.028515566140413284,
0.02137397602200508,
-0.035825591534376144,
0.029704829677939415,
-0.10... |
87279782-0484-404f-98e6-83b5b2ba03c8 | description: 'With the determined precision computes the quantile of a numeric data
sequence.'
sidebar_position: 180
slug: /sql-reference/aggregate-functions/reference/quantiletiming
title: 'quantileTiming'
doc_type: 'reference'
quantileTiming
With the determined precision computes the
quantile
of a numeric data sequence.
The result is deterministic (it does not depend on the query processing order). The function is optimized for working with sequences that describe distributions such as web page loading times or backend response times.
When using multiple
quantile*
functions with different levels in a query, the internal states are not combined (that is, the query works less efficiently than it could). In this case, use the
quantiles
function.
Syntax
sql
quantileTiming(level)(expr)
Alias:
medianTiming
.
Arguments
level
β Level of quantile. Optional parameter. Constant floating-point number from 0 to 1. We recommend using a
level
value in the range of
[0.01, 0.99]
. Default value: 0.5. At
level=0.5
the function calculates
median
.
expr
β
Expression
over a column values returning a
Float*
-type number.
If negative values are passed to the function, the behavior is undefined.
If the value is greater than 30,000 (a page loading time of more than 30 seconds), it is assumed to be 30,000.
Accuracy
The calculation is accurate if:
Total number of values does not exceed 5670.
Total number of values exceeds 5670, but the page loading time is less than 1024ms.
Otherwise, the result of the calculation is rounded to the nearest multiple of 16 ms.
:::note
For calculating page loading time quantiles, this function is more effective and accurate than
quantile
.
:::
Returned value
Quantile of the specified level.
Type:
Float32
.
:::note
If no values are passed to the function (when using
quantileTimingIf
),
NaN
is returned. The purpose of this is to differentiate these cases from cases that result in zero. See
ORDER BY clause
for notes on sorting
NaN
values.
:::
Example
Input table:
text
ββresponse_timeββ
β 72 β
β 112 β
β 126 β
β 145 β
β 104 β
β 242 β
β 313 β
β 168 β
β 108 β
βββββββββββββββββ
Query:
sql
SELECT quantileTiming(response_time) FROM t
Result:
text
ββquantileTiming(response_time)ββ
β 126 β
βββββββββββββββββββββββββββββββββ
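For a quick self-contained check, the same data can be supplied inline with arrayJoin rather than reading from a table (the subquery below is illustrative, not part of the original example):

```sql
-- Illustrative self-contained variant of the example above:
-- the same response times supplied inline, with an explicit 0.5 level.
SELECT quantileTiming(0.5)(response_time) AS median_time
FROM
(
    SELECT arrayJoin([72, 112, 126, 145, 104, 242, 313, 168, 108]) AS response_time
);
```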
See Also
median
quantiles | {"source_file": "quantiletiming.md"} | [
-0.05667293816804886,
-0.0026105011347681284,
0.01837890036404133,
0.017377063632011414,
-0.11019139736890793,
-0.07460471987724304,
0.0013701367424800992,
0.07866253703832626,
0.0028651102911680937,
0.00821702741086483,
-0.0006566020892933011,
-0.0888800173997879,
0.021485362201929092,
-0... |
159d5b82-7df7-438c-a0e7-05701561928b | description: 'Bitmap or Aggregate calculations from a unsigned integer column, return
cardinality of type UInt64, if add suffix -State, then return a bitmap object'
sidebar_position: 148
slug: /sql-reference/aggregate-functions/reference/groupbitmap
title: 'groupBitmap'
doc_type: 'reference'
groupBitmap
Performs bitmap or aggregate calculations on an unsigned integer column and returns the cardinality as UInt64. With the -State suffix, it returns a
bitmap object
.
sql
groupBitmap(expr)
Arguments
expr
β An expression that results in
UInt*
type.
Return value
Value of the
UInt64
type.
Example
Test data:
text
UserID
1
1
2
3
Query:
sql
SELECT groupBitmap(UserID) AS num FROM t
Result:
text
num
3 | {"source_file": "groupbitmap.md"} | [
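The same result can be reproduced without creating a table, by generating the test data inline (the subquery and cast below are illustrative, not part of the original example):

```sql
-- Illustrative self-contained variant: the same UserID values supplied inline.
SELECT groupBitmap(UserID) AS num
FROM
(
    SELECT arrayJoin([1, 1, 2, 3]::Array(UInt32)) AS UserID
);
```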
0.035628657788038254,
0.10637380182743073,
-0.01938793435692787,
0.04958935081958771,
-0.12214002758264542,
0.015148957259953022,
0.04465058445930481,
0.014688168652355671,
-0.06170973181724548,
-0.03527972474694252,
0.02265886403620243,
-0.08913332968950272,
0.061562880873680115,
-0.05782... |
3135e8f9-ea9a-4da9-81d7-00f24317801e | description: 'Calculates the minimum from
value
array according to the keys specified
in the
key
array.'
sidebar_position: 169
slug: /sql-reference/aggregate-functions/reference/minmap
title: 'minMap'
doc_type: 'reference'
minMap
Calculates the minimum from
value
array according to the keys specified in the
key
array.
Syntax
sql
minMap(key, value)
or
sql
minMap(Tuple(key, value))
Alias:
minMappedArrays
:::note
- Passing a tuple of keys and value arrays is identical to passing an array of keys and an array of values.
- The number of elements in
key
and
value
must be the same for each row that is totaled.
:::
Parameters
key
β Array of keys.
Array
.
value
β Array of values.
Array
.
Returned value
Returns a tuple of two arrays: keys in sorted order, and values calculated for the corresponding keys.
Tuple
(
Array
,
Array
).
Example
Query:
sql
SELECT minMap(a, b)
FROM VALUES('a Array(Int32), b Array(Int64)', ([1, 2], [2, 2]), ([2, 3], [1, 1]))
Result:
text
ββminMap(a, b)βββββββ
β ([1,2,3],[2,1,1]) β
βββββββββββββββββββββ | {"source_file": "minmap.md"} | [
0.07127900421619415,
0.06953177601099014,
-0.014546315185725689,
-0.022006725892424583,
-0.09212352335453033,
0.0031081160996109247,
0.0708337277173996,
0.07195337116718292,
-0.034124620258808136,
-0.0037202653475105762,
0.012573196552693844,
-0.052080899477005005,
0.11634216457605362,
-0.... |
5576ca2d-9c74-46de-b7a2-1a9537574f49 | description: 'Returns the exponentially smoothed weighted moving average of values
of a time series at point
t
in time.'
sidebar_position: 133
slug: /sql-reference/aggregate-functions/reference/exponentialTimeDecayedAvg
title: 'exponentialTimeDecayedAvg'
doc_type: 'reference'
exponentialTimeDecayedAvg {#exponentialtimedecayedavg}
Returns the exponentially smoothed weighted moving average of values of a time series at point
t
in time.
Syntax
sql
exponentialTimeDecayedAvg(x)(v, t)
Arguments
v
β Value.
Integer
,
Float
or
Decimal
.
t
β Time.
Integer
,
Float
or
Decimal
,
DateTime
,
DateTime64
.
Parameters
x
β Half-life period.
Integer
,
Float
or
Decimal
.
Returned values
Returns an exponentially smoothed weighted moving average at index
t
in time.
Float64
.
Examples
Query:
sql
SELECT
value,
time,
round(exp_smooth, 3),
bar(exp_smooth, 0, 5, 50) AS bar
FROM
(
SELECT
(number = 0) OR (number >= 25) AS value,
number AS time,
exponentialTimeDecayedAvg(10)(value, time) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS exp_smooth
FROM numbers(50)
);
Response: | {"source_file": "exponentialtimedecayedavg.md"} | [
-0.10797903686761856,
-0.04691558703780174,
-0.01697118580341339,
0.061787236481904984,
-0.05396736413240433,
-0.10006655007600784,
0.09778919816017151,
0.03697466105222702,
0.03626035898923874,
-0.004059751518070698,
0.03989621624350548,
-0.10031181573867798,
0.04383737966418266,
-0.03941... |
787c48e7-24d1-42cc-b18c-c7494471c1d8 | Response:
sql
ββvalueββ¬βtimeββ¬βround(exp_smooth, 3)ββ¬βbarβββββββββ
1. β 1 β 0 β 1 β ββββββββββ β
2. β 0 β 1 β 0.475 β βββββ β
3. β 0 β 2 β 0.301 β βββ β
4. β 0 β 3 β 0.214 β βββ β
5. β 0 β 4 β 0.162 β ββ β
6. β 0 β 5 β 0.128 β ββ β
7. β 0 β 6 β 0.104 β β β
8. β 0 β 7 β 0.086 β β β
9. β 0 β 8 β 0.072 β β β
10. β 0 β 9 β 0.061 β β β
11. β 0 β 10 β 0.052 β β β
12. β 0 β 11 β 0.045 β β β
13. β 0 β 12 β 0.039 β β β
14. β 0 β 13 β 0.034 β β β
15. β 0 β 14 β 0.03 β β β
16. β 0 β 15 β 0.027 β β β
17. β 0 β 16 β 0.024 β β β
18. β 0 β 17 β 0.021 β β β
19. β 0 β 18 β 0.018 β β β
20. β 0 β 19 β 0.016 β β β
21. β 0 β 20 β 0.015 β β β
22. β 0 β 21 β 0.013 β β β
23. β 0 β 22 β 0.012 β β β
24. β 0 β 23 β 0.01 β β β
25. β 0 β 24 β 0.009 β β β
26. β 1 β 25 β 0.111 β β β
27. β 1 β 26 β 0.202 β ββ β
28. β 1 β 27 β 0.283 β βββ β
29. β 1 β 28 β 0.355 β ββββ β
30. β 1 β 29 β 0.42 β βββββ β
31. β 1 β 30 β 0.477 β βββββ β
32. β 1 β 31 β 0.529 β ββββββ β
33. β 1 β 32 β 0.576 β ββββββ β
34. β 1 β 33 β 0.618 β βββββββ β
35. β 1 β 34 β 0.655 β βββββββ β
36. β 1 β 35 β 0.689 β βββββββ β
37. β 1 β 36 β 0.719 β ββββββββ β
38. β 1 β 37 β 0.747 β ββββββββ β
39. β 1 β 38 β 0.771 β ββββββββ β
40. β 1 β 39 β 0.793 β ββββββββ β
41. β 1 β 40 β 0.813 β βββββββββ β
42. β 1 β 41 β 0.831 β βββββββββ β
43. β 1 β 42 β 0.848 β βββββββββ β
44. β 1 β 43 β 0.862 β βββββββββ β
45. β 1 β 44 β 0.876 β βββββββββ β
46. β 1 β 45 β 0.888 β βββββββββ β
47. β 1 β 46 β 0.898 β βββββββββ β
48. β 1 β 47 β 0.908 β βββββββββ β
49. β 1 β 48 β 0.917 β ββββββββββ β
50. β 1 β 49 β 0.925 β ββββββββββ β
βββββββββ΄βββββββ΄βββββββββββββββββββββββ΄βββββββββββββ | {"source_file": "exponentialtimedecayedavg.md"} | [
-0.030177047476172447,
-0.05252765864133835,
-0.02834406867623329,
0.04259036108851433,
-0.046888478100299835,
-0.10587906837463379,
0.08679560571908951,
-0.028901811689138412,
-0.061887551099061966,
0.07137588411569595,
0.07147925347089767,
-0.08400993794202805,
0.0042009237222373486,
-0.... |
8d4014ad-7558-486f-9270-073f1856934b | description: 'Aggregate function that calculates PromQL-like idelta over time series data on the specified grid.'
sidebar_position: 222
slug: /sql-reference/aggregate-functions/reference/timeSeriesInstantDeltaToGrid
title: 'timeSeriesInstantDeltaToGrid'
doc_type: 'reference'
timeSeriesInstantDeltaToGrid
Aggregate function that takes time series data as pairs of timestamps and values and calculates
PromQL-like idelta
from this data on a regular time grid described by start timestamp, end timestamp and step. For each point on the grid the samples for calculating
idelta
are considered within the specified time window.
Parameters:
-
start timestamp
- Specifies start of the grid.
-
end timestamp
- Specifies end of the grid.
-
grid step
- Specifies step of the grid in seconds.
-
staleness
- Specifies the maximum "staleness" in seconds of the considered samples. The staleness window is a left-open and right-closed interval.
Arguments:
-
timestamp
- timestamp of the sample
-
value
- value of the time series corresponding to the
timestamp
Return value:
idelta
values on the specified grid as an
Array(Nullable(Float64))
. The returned array contains one value for each time grid point. The value is NULL if there are not enough samples within the window to calculate the instant delta value for a particular grid point.
Example:
The following query calculates
idelta
values on the grid [90, 105, 120, 135, 150, 165, 180, 195, 210]:
sql
WITH
-- NOTE: the gap between 140 and 190 is to show how values are filled for ts = 150, 165, 180 according to window parameter
[110, 120, 130, 140, 190, 200, 210, 220, 230]::Array(DateTime) AS timestamps,
[1, 1, 3, 4, 5, 5, 8, 12, 13]::Array(Float32) AS values, -- array of values corresponding to timestamps above
90 AS start_ts, -- start of timestamp grid
90 + 120 AS end_ts, -- end of timestamp grid
15 AS step_seconds, -- step of timestamp grid
45 AS window_seconds -- "staleness" window
SELECT timeSeriesInstantDeltaToGrid(start_ts, end_ts, step_seconds, window_seconds)(timestamp, value)
FROM
(
-- This subquery converts arrays of timestamps and values into rows of `timestamp`, `value`
SELECT
arrayJoin(arrayZip(timestamps, values)) AS ts_and_val,
ts_and_val.1 AS timestamp,
ts_and_val.2 AS value
);
Response:
response
ββtimeSeriesInstaβ―stamps, values)ββ
1. β [NULL,NULL,0,2,1,1,NULL,NULL,3] β
βββββββββββββββββββββββββββββββββββ
Also it is possible to pass multiple samples of timestamps and values as Arrays of equal size. The same query with array arguments:
sql
WITH
[110, 120, 130, 140, 190, 200, 210, 220, 230]::Array(DateTime) AS timestamps,
[1, 1, 3, 4, 5, 5, 8, 12, 13]::Array(Float32) AS values,
90 AS start_ts,
90 + 120 AS end_ts,
15 AS step_seconds,
45 AS window_seconds
SELECT timeSeriesInstantDeltaToGrid(start_ts, end_ts, step_seconds, window_seconds)(timestamps, values); | {"source_file": "timeSeriesInstantDeltaToGrid.md"} | [
-0.08710699528455734,
-0.014406213536858559,
-0.09601826965808868,
0.05012832209467888,
-0.04949557036161423,
-0.03963586688041687,
0.017825495451688766,
0.0840180367231369,
0.04921916127204895,
0.020779622718691826,
-0.013484800234436989,
-0.05350353941321373,
0.0010546682169660926,
-0.01... |
5af55d7b-ee02-49f3-b0c4-33a238ddf516 | :::note
This function is experimental, enable it by setting
allow_experimental_ts_to_grid_aggregate_function=true
.
::: | {"source_file": "timeSeriesInstantDeltaToGrid.md"} | [
-0.04617350548505783,
-0.037194281816482544,
-0.013469654135406017,
0.09186480939388275,
0.022367699071764946,
0.004247533623129129,
0.002288512885570526,
-0.07123781740665436,
0.0007586118299514055,
0.07126599550247192,
-0.006106517277657986,
-0.012565995566546917,
-0.00505802920088172,
-... |
d8ea1acc-8a99-4f9e-8b11-8416c1c3ce6c | description: 'Computes the skewness of a sequence.'
sidebar_position: 185
slug: /sql-reference/aggregate-functions/reference/skewpop
title: 'skewPop'
doc_type: 'reference'
skewPop
Computes the
skewness
of a sequence.
sql
skewPop(expr)
Arguments
expr
β
Expression
returning a number.
Returned value
The skewness of the given distribution. Type β
Float64
Example
sql
SELECT skewPop(value) FROM series_with_value_column; | {"source_file": "skewpop.md"} | [
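Since series_with_value_column is not defined above, a self-contained sketch of the same query can use inline data (the sample values below are illustrative):

```sql
-- Illustrative self-contained example: skewness of a small right-skewed sample.
SELECT skewPop(value) AS skew
FROM
(
    SELECT arrayJoin([1, 2, 2, 2, 10]) AS value
);
```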
-0.09742235392332077,
0.011957000941038132,
-0.047984641045331955,
0.006786075886338949,
-0.03981994464993477,
-0.013179996982216835,
0.009067421779036522,
0.051077667623758316,
0.04804566502571106,
-0.04379099979996681,
0.07552586495876312,
-0.00250240508466959,
-0.007751955650746822,
-0.... |
06e66c9e-b82e-4c27-adc7-c032c2ad082d | description: 'Applies the Mann-Whitney rank test to samples from two populations.'
sidebar_label: 'mannWhitneyUTest'
sidebar_position: 161
slug: /sql-reference/aggregate-functions/reference/mannwhitneyutest
title: 'mannWhitneyUTest'
doc_type: 'reference'
mannWhitneyUTest
Applies the Mann-Whitney rank test to samples from two populations.
Syntax
sql
mannWhitneyUTest[(alternative[, continuity_correction])](sample_data, sample_index)
Values of both samples are in the
sample_data
column. If
sample_index
equals 0 then the value in that row belongs to the sample from the first population. Otherwise it belongs to the sample from the second population.
The null hypothesis is that the two populations are stochastically equal. One-sided hypotheses can also be tested. This test does not assume that the data are normally distributed.
Arguments
sample_data
β sample data.
Integer
,
Float
or
Decimal
.
sample_index
β sample index.
Integer
.
Parameters
alternative
β alternative hypothesis. (Optional, default:
'two-sided'
.)
String
.
'two-sided'
;
'greater'
;
'less'
.
continuity_correction
β if not 0 then continuity correction in the normal approximation for the p-value is applied. (Optional, default: 1.)
UInt64
.
Returned values
Tuple
with two elements:
calculated U-statistic.
Float64
.
calculated p-value.
Float64
.
Example
Input table:
text
ββsample_dataββ¬βsample_indexββ
β 10 β 0 β
β 11 β 0 β
β 12 β 0 β
β 1 β 1 β
β 2 β 1 β
β 3 β 1 β
βββββββββββββββ΄βββββββββββββββ
Query:
sql
SELECT mannWhitneyUTest('greater')(sample_data, sample_index) FROM mww_ttest;
Result:
text
ββmannWhitneyUTest('greater')(sample_data, sample_index)ββ
β (9,0.04042779918503192) β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
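The input table above can also be built inline, which makes the example self-contained (the UNION ALL subquery is illustrative, not part of the original example):

```sql
-- Illustrative self-contained variant: both samples built inline.
SELECT mannWhitneyUTest('greater')(sample_data, sample_index)
FROM
(
    SELECT arrayJoin([10, 11, 12]) AS sample_data, 0 AS sample_index
    UNION ALL
    SELECT arrayJoin([1, 2, 3]) AS sample_data, 1 AS sample_index
);
```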
See Also
MannβWhitney U test
Stochastic ordering | {"source_file": "mannwhitneyutest.md"} | [
0.052364494651556015,
0.019673040136694908,
0.010414870455861092,
0.04787549376487732,
-0.0059816245920956135,
0.017966710031032562,
-0.013941575773060322,
0.029635854065418243,
-0.10080848634243011,
0.021098028868436813,
0.056691113859415054,
-0.11376750469207764,
0.0772872120141983,
-0.0... |
65f9be11-6ef1-4265-9858-f20996e5a6f3 | description: 'Computes the quantile of a numeric data sequence using the Greenwald-Khanna
algorithm.'
sidebar_position: 175
slug: /sql-reference/aggregate-functions/reference/quantileGK
title: 'quantileGK'
doc_type: 'reference'
quantileGK
Computes the
quantile
of a numeric data sequence using the
Greenwald-Khanna
algorithm. The Greenwald-Khanna algorithm is an algorithm used to compute quantiles on a stream of data in a highly efficient manner. It was introduced by Michael Greenwald and Sanjeev Khanna in 2001. It is widely used in databases and big data systems where computing accurate quantiles on a large stream of data in real-time is necessary. The algorithm is highly efficient, taking only O(log n) space and O(log log n) time per item (where n is the size of the input). It is also highly accurate, providing an approximate quantile value with high probability.
quantileGK
is different from other quantile functions in ClickHouse, because it lets the user control the accuracy of the approximate quantile result.
Syntax
sql
quantileGK(accuracy, level)(expr)
Alias:
medianGK
.
Arguments
accuracy
β Accuracy of quantile. Constant positive integer. Larger accuracy value means less error. For example, if the accuracy argument is set to 100, the computed quantile will have an error no greater than 1% with high probability. There is a trade-off between the accuracy of the computed quantiles and the computational complexity of the algorithm. A larger accuracy requires more memory and computational resources to compute the quantile accurately, while a smaller accuracy argument allows for a faster and more memory-efficient computation but with a slightly lower accuracy.
level
β Level of quantile. Optional parameter. Constant floating-point number from 0 to 1. Default value: 0.5. At
level=0.5
the function calculates
median
.
expr
β Expression over the column values resulting in numeric
data types
,
Date
or
DateTime
.
Returned value
Quantile of the specified level and accuracy.
Type:
Float64
for numeric data type input.
Date
if input values have the
Date
type.
DateTime
if input values have the
DateTime
type.
Example
```sql
SELECT quantileGK(1, 0.25)(number + 1)
FROM numbers(1000)
ββquantileGK(1, 0.25)(plus(number, 1))ββ
β 1 β
ββββββββββββββββββββββββββββββββββββββββ
SELECT quantileGK(10, 0.25)(number + 1)
FROM numbers(1000)
ββquantileGK(10, 0.25)(plus(number, 1))ββ
β 156 β
βββββββββββββββββββββββββββββββββββββββββ
SELECT quantileGK(100, 0.25)(number + 1)
FROM numbers(1000)
ββquantileGK(100, 0.25)(plus(number, 1))ββ
β 251 β
ββββββββββββββββββββββββββββββββββββββββββ
SELECT quantileGK(1000, 0.25)(number + 1)
FROM numbers(1000) | {"source_file": "quantileGK.md"} | [
-0.1084093227982521,
0.04076549410820007,
-0.0572059266269207,
-0.04715891554951668,
-0.09443962574005127,
-0.11292005330324173,
0.025854505598545074,
0.04293840751051903,
0.02672533690929413,
-0.049133799970149994,
-0.03176618739962578,
-0.00559206260368228,
0.0012259940849617124,
-0.0418... |
5f083021-cfe2-48cc-96a4-e58d2e109af9 | ββquantileGK(100, 0.25)(plus(number, 1))ββ
β 251 β
ββββββββββββββββββββββββββββββββββββββββββ
SELECT quantileGK(1000, 0.25)(number + 1)
FROM numbers(1000)
ββquantileGK(1000, 0.25)(plus(number, 1))ββ
β 249 β
βββββββββββββββββββββββββββββββββββββββββββ
```
See Also
median
quantiles | {"source_file": "quantileGK.md"} | [
-0.0019856838043779135,
0.019457798451185226,
0.025586076080799103,
-0.04976389557123184,
-0.038182370364665985,
-0.0423649437725544,
0.08222802728414536,
0.04794514551758766,
0.0034725407604128122,
0.03759560361504555,
0.029070887714624405,
-0.08270762860774994,
0.053823042660951614,
-0.0... |
6fba06d7-68b3-48ed-9a8f-a066b6559b73 | description: 'Return an intersection of given arrays (Return all items of arrays,
that are in all given arrays).'
sidebar_position: 141
slug: /sql-reference/aggregate-functions/reference/grouparrayintersect
title: 'groupArrayIntersect'
doc_type: 'reference'
groupArrayIntersect
Returns an intersection of the given arrays, i.e. all items that are present in every given array.
Syntax
sql
groupArrayIntersect(x)
Arguments
x
β Argument (column name or expression).
Returned values
Array that contains elements that are in all arrays.
Type:
Array
.
Examples
Consider table
numbers
:
text
ββaβββββββββββββββ
β [1,2,4] β
β [1,5,2,8,-1,0] β
β [1,5,7,5,8,2] β
ββββββββββββββββββ
Query with column name as argument:
sql
SELECT groupArrayIntersect(a) AS intersection FROM numbers;
Result:
text
ββintersectionβββββββ
β [1, 2] β
βββββββββββββββββββββ | {"source_file": "grouparrayintersect.md"} | [
0.061398670077323914,
-0.009873652830719948,
-0.013833082281053066,
0.04149113595485687,
-0.004295438062399626,
-0.025697018951177597,
0.13022778928279877,
-0.0592762753367424,
-0.06796469539403915,
-0.03543096408247948,
-0.03636499121785164,
0.02301528863608837,
0.021533114835619926,
-0.0... |
998c6ca0-56ab-425f-af1c-426b7430e7c1 | description: 'Estimates the compression ratio of a given column without compressing
it.'
sidebar_position: 132
slug: /sql-reference/aggregate-functions/reference/estimateCompressionRatio
title: 'estimateCompressionRatio'
doc_type: 'reference'
estimateCompressionRatio {#estimatecompressionratio}
Estimates the compression ratio of a given column without compressing it.
Syntax
sql
estimateCompressionRatio(codec, block_size_bytes)(column)
Arguments
column
- Column of any type
Parameters
codec
-
String
containing a
compression codec
or multiple comma-separated codecs in a single string.
block_size_bytes
- Block size of compressed data. This is similar to setting both
max_compress_block_size
and
min_compress_block_size
. The default value is 1 MiB (1048576 bytes).
Both parameters are optional.
Returned values
Returns an estimate compression ratio for the given column.
Type:
Float64
.
Examples
```sql title="Input table"
CREATE TABLE compression_estimate_example
(
    `number` UInt64
)
ENGINE = MergeTree()
ORDER BY number
SETTINGS min_bytes_for_wide_part = 0;
INSERT INTO compression_estimate_example
SELECT number FROM system.numbers LIMIT 100_000;
```
sql title="Query"
SELECT estimateCompressionRatio(number) AS estimate FROM compression_estimate_example;
text title="Response"
ββββββββββββestimateββ
β 1.9988506608699999 β
ββββββββββββββββββββββ
:::note
The result above will differ based on the default compression codec of the server. See
Column Compression Codecs
.
:::
sql title="Query"
SELECT estimateCompressionRatio('T64')(number) AS estimate FROM compression_estimate_example;
text title="Response"
βββββββββββestimateββ
β 3.762758101688538 β
βββββββββββββββββββββ
The function can also specify multiple codecs:
sql title="Query"
SELECT estimateCompressionRatio('T64, ZSTD')(number) AS estimate FROM compression_estimate_example;
response title="Response"
ββββββββββββestimateββ
β 143.60078980434392 β
ββββββββββββββββββββββ | {"source_file": "estimateCompressionRatio.md"} | [
-0.054772667586803436,
0.03732019290328026,
-0.1108098104596138,
0.05379989743232727,
-0.023301642388105392,
-0.03743823245167732,
-0.039860595017671585,
0.09818064421415329,
-0.05915483832359314,
0.009469196200370789,
-0.03901473805308342,
-0.07491567730903625,
0.05759343132376671,
-0.080... |
9cefc005-0760-4fd1-9348-0c3d2fc55713 | description: 'Creates an array of sample argument values. The size of the resulting
array is limited to
max_size
elements. Argument values are selected and added
to the array randomly.'
sidebar_position: 145
slug: /sql-reference/aggregate-functions/reference/grouparraysample
title: 'groupArraySample'
doc_type: 'reference'
groupArraySample
Creates an array of sample argument values. The size of the resulting array is limited to
max_size
elements. Argument values are selected and added to the array randomly.
Syntax
sql
groupArraySample(max_size[, seed])(x)
Arguments
max_size
β Maximum size of the resulting array.
UInt64
.
seed
β Seed for the random number generator. Optional.
UInt64
. Default value:
123456
.
x
β Argument (column name or expression).
Returned values
Array of randomly selected
x
arguments.
Type:
Array
.
Examples
Consider table
colors
:
text
ββidββ¬βcolorβββ
β 1 β red β
β 2 β blue β
β 3 β green β
β 4 β white β
β 5 β orange β
ββββββ΄βββββββββ
Query with column name as argument:
sql
SELECT groupArraySample(3)(color) as newcolors FROM colors;
Result:
text
ββnewcolorsβββββββββββββββββββ
β ['white','blue','green'] β
ββββββββββββββββββββββββββββββ
Query with column name and different seed:
sql
SELECT groupArraySample(3, 987654321)(color) as newcolors FROM colors;
Result:
text
ββnewcolorsβββββββββββββββββββ
β ['red','orange','green'] β
ββββββββββββββββββββββββββββββ
Query with expression as argument:
sql
SELECT groupArraySample(3)(concat('light-', color)) as newcolors FROM colors;
Result:
text
ββnewcolorsββββββββββββββββββββββββββββββββββββ
β ['light-blue','light-orange','light-green'] β
βββββββββββββββββββββββββββββββββββββββββββββββ | {"source_file": "grouparraysample.md"} | [
0.023385541513562202,
-0.0016196968499571085,
-0.03656516969203949,
0.06916739791631699,
0.0024041556753218174,
-0.008866490796208382,
0.1016763299703598,
-0.07437092810869217,
-0.008948670700192451,
-0.02324550971388817,
-0.028470028191804886,
0.025250732898712158,
0.04864881560206413,
-0... |
00d3021a-1d10-41d9-8f5a-eeb86a3fab63 | description: 'The result is equal to the square root of varSamp'
sidebar_position: 190
slug: /sql-reference/aggregate-functions/reference/stddevsamp
title: 'stddevSamp'
doc_type: 'reference'
stddevSamp
The result is equal to the square root of
varSamp
.
Alias:
STDDEV_SAMP
.
:::note
This function uses a numerically unstable algorithm. If you need
numerical stability
in calculations, use the
stddevSampStable
function. It works slower but provides a lower computational error.
:::
Syntax
sql
stddevSamp(x)
Parameters
x
: Values for which to find the square root of sample variance.
(U)Int*
,
Float*
,
Decimal*
.
Returned value
Square root of sample variance of
x
.
Float64
.
Example
Query:
```sql
DROP TABLE IF EXISTS test_data;
CREATE TABLE test_data
(
population UInt8
)
ENGINE = Log;
INSERT INTO test_data VALUES (3),(3),(3),(4),(4),(5),(5),(7),(11),(15);
SELECT
stddevSamp(population)
FROM test_data;
```
Result:
response
ββstddevSamp(population)ββ
β 4 β
ββββββββββββββββββββββββββ | {"source_file": "stddevsamp.md"} | [
-0.009841266088187695,
0.004789599683135748,
0.02090964838862419,
0.019633445888757706,
-0.06539098173379898,
-0.015230325050652027,
0.0436522476375103,
0.12785163521766663,
0.02421584352850914,
-0.003941020462661982,
0.04625684767961502,
-0.0019208879675716162,
0.034389182925224304,
-0.10... |
992cc128-47ca-44d4-bb4e-3538e7d6882a | description: 'Landing page for aggregate functions with complete list of aggregate
functions'
sidebar_position: 36
slug: /sql-reference/aggregate-functions/reference/
title: 'Aggregate Functions'
toc_folder_title: 'Reference'
toc_hidden: true
doc_type: 'landing-page'
Aggregate functions
ClickHouse supports all standard SQL aggregate functions (
sum
,
avg
,
min
,
max
,
count
), as well as a wide range of other aggregate functions. | {"source_file": "index.md"} | [
0.00044325541239231825,
-0.11637332290410995,
-0.01449220534414053,
0.049585770815610886,
-0.0440010167658329,
0.049642760306596756,
-0.01276326458901167,
0.04722421243786812,
-0.033840131014585495,
0.015317766927182674,
-0.007159699220210314,
-0.006110537331551313,
0.0365159809589386,
-0.... |
1dd1a687-1588-4b72-819b-482d1a78b57c | description: 'Computes an approximate quantile of a numeric data sequence.'
sidebar_position: 170
slug: /sql-reference/aggregate-functions/reference/quantile
title: 'quantile'
doc_type: 'reference'
quantile
Computes an approximate
quantile
of a numeric data sequence.
This function applies
reservoir sampling
with a reservoir size up to 8192 and a random number generator for sampling. The result is non-deterministic. To get an exact quantile, use the
quantileExact
function.
When using multiple
quantile*
functions with different levels in a query, the internal states are not combined (that is, the query works less efficiently than it could). In this case, use the
quantiles
function.
Note that for an empty numeric sequence,
quantile
will return NaN, but its
quantile*
variants will return either NaN or a default value for the sequence type, depending on the variant.
Syntax
sql
quantile(level)(expr)
Alias:
median
.
Arguments
level
β Level of quantile. Optional parameter. Constant floating-point number from 0 to 1. We recommend using a
level
value in the range of
[0.01, 0.99]
. Default value: 0.5. At
level=0.5
the function calculates
median
.
expr
β Expression over the column values resulting in numeric
data types
,
Date
or
DateTime
.
Returned value
Approximate quantile of the specified level.
Type:
Float64
for numeric data type input.
Date
if input values have the
Date
type.
DateTime
if input values have the
DateTime
type.
Example
Input table:
text
ββvalββ
β 1 β
β 1 β
β 2 β
β 3 β
βββββββ
Query:
sql
SELECT quantile(val) FROM t
Result:
text
┌─quantile(val)─┐
│           1.5 │
└───────────────┘
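For intuition, the following Python sketch (illustrative, not ClickHouse code) computes an exact linearly interpolated quantile, which reproduces the example above. ClickHouse's `quantile` instead estimates the value from a reservoir sample of up to 8192 rows, so on large inputs its result is approximate and non-deterministic.

```python
def quantile(values, level=0.5):
    """Exact linearly interpolated quantile, for illustration only."""
    s = sorted(values)
    pos = level * (len(s) - 1)          # fractional rank in the sorted sample
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

print(quantile([1, 1, 2, 3]))  # 1.5, as in the example
```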
See Also
median
quantiles | {"source_file": "quantile.md"} | [
-0.06025293841958046,
-0.014578370377421379,
0.024190014228224754,
-0.0018599762115627527,
-0.060115084052085876,
-0.1038094088435173,
0.05484123155474663,
0.07549737393856049,
0.01144056860357523,
-0.021187826991081238,
-0.05515257269144058,
-0.11481669545173645,
0.01337695773690939,
-0.0... |
0bd69d70-4d70-4e5f-8cfd-9018035b1aa8 | description: 'Creates an array of argument values. Values can be added to the array
in any (indeterminate) order.'
sidebar_position: 139
slug: /sql-reference/aggregate-functions/reference/grouparray
title: 'groupArray'
doc_type: 'reference'
groupArray
Syntax:
groupArray(x)
or
groupArray(max_size)(x)
Creates an array of argument values.
Values can be added to the array in any (indeterminate) order.
The second version (with the
max_size
parameter) limits the size of the resulting array to
max_size
elements. For example,
groupArray(1)(x)
is equivalent to
[any (x)]
.
In some cases, you can still rely on the order of execution. This applies to cases when
SELECT
comes from a subquery that uses
ORDER BY
if the subquery result is small enough.
Example
```text
SELECT * FROM default.ck;
┌─id─┬─name─────┐
│  1 │ zhangsan │
│  1 │ ᴺᵁᴸᴸ     │
│  1 │ lisi     │
│  2 │ wangwu   │
└────┴──────────┘
```
Query:
sql
SELECT id, groupArray(10)(name) FROM default.ck GROUP BY id;
Result:
text
┌─id─┬─groupArray(10)(name)─┐
│  1 │ ['zhangsan','lisi']  │
│  2 │ ['wangwu']           │
└────┴──────────────────────┘
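A minimal Python model of this behavior (illustrative, not ClickHouse code): collect values in arrival order, skip NULLs, and optionally cap the array at `max_size`.

```python
def group_array(values, max_size=None):
    # NULL (None) values are skipped, mirroring groupArray; arrival order
    # is preserved here, though ClickHouse makes no ordering guarantee
    # in the general case.
    out = [v for v in values if v is not None]
    return out if max_size is None else out[:max_size]

group_array(['zhangsan', None, 'lisi'])  # ['zhangsan', 'lisi']
```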
As the result shows, the groupArray function removes NULL values.
Alias:
array_agg
. | {"source_file": "grouparray.md"} | [
0.006038210820406675,
0.021637720987200737,
-0.014486867003142834,
0.03267877548933029,
-0.04919998720288277,
-0.007091797422617674,
0.004749896936118603,
0.017719054594635963,
0.043360255658626556,
-0.030583489686250687,
-0.011460074223577976,
0.09620892256498337,
0.029872415587306023,
-0... |
49c2a7bb-4f0c-4bd1-8dd2-0f68832f563d | description: 'Returns the sum of exponentially smoothed moving average values of a
time series at the index
t
in time.'
sidebar_position: 136
slug: /sql-reference/aggregate-functions/reference/exponentialTimeDecayedSum
title: 'exponentialTimeDecayedSum'
doc_type: 'reference'
exponentialTimeDecayedSum {#exponentialtimedecayedsum}
Returns the sum of exponentially smoothed moving average values of a time series at the index
t
in time.
Syntax
sql
exponentialTimeDecayedSum(x)(v, t)
Arguments
v
β Value.
Integer
,
Float
or
Decimal
.
t
β Time.
Integer
,
Float
or
Decimal
,
DateTime
,
DateTime64
.
Parameters
x
β Time difference required for a value's weight to decay to 1/e.
Integer
,
Float
or
Decimal
.
Returned values
Returns the sum of exponentially smoothed moving average values at the given point in time.
Float64
.
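This page does not spell out the decay formula; assuming the standard exponential weighting `exp(-(t_now - t_i) / x)` for each observed point (an assumption, not quoted from the source), a short Python sketch reproduces the rounded values in the example's result table:

```python
import math

def exp_time_decayed_sum(points, x, t_now):
    # Each observed (value, time) pair contributes value * exp(-(t_now - t) / x).
    return sum(v * math.exp(-(t_now - t) / x) for v, t in points)

# value 1 observed at t=0, evaluated at t=1 with x=10:
round(exp_time_decayed_sum([(1, 0)], 10, 1), 3)  # 0.905
```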
Example
Query:
sql
SELECT
value,
time,
round(exp_smooth, 3),
bar(exp_smooth, 0, 10, 50) AS bar
FROM
(
SELECT
(number = 0) OR (number >= 25) AS value,
number AS time,
exponentialTimeDecayedSum(10)(value, time) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS exp_smooth
FROM numbers(50)
);
Result: | {"source_file": "exponentialtimedecayedsum.md"} | [
-0.10303454101085663,
-0.04049541801214218,
-0.004072993062436581,
0.04721788689494133,
-0.03074544109404087,
-0.07659495621919632,
0.12378331273794174,
0.02379786968231201,
0.03419151529669762,
0.0020095992367714643,
0.047009341418743134,
-0.11448642611503601,
0.04756614938378334,
-0.0571... |
13c7e25c-847a-4aeb-86c2-410aefd9616a | response
ββvalueββ¬βtimeββ¬βround(exp_smooth, 3)ββ¬βbarββββββββββββββββββββββββββββββββββββββββββββββββ
1. β 1 β 0 β 1 β βββββ β
2. β 0 β 1 β 0.905 β βββββ β
3. β 0 β 2 β 0.819 β ββββ β
4. β 0 β 3 β 0.741 β ββββ β
5. β 0 β 4 β 0.67 β ββββ β
6. β 0 β 5 β 0.607 β βββ β
7. β 0 β 6 β 0.549 β βββ β
8. β 0 β 7 β 0.497 β βββ β
9. β 0 β 8 β 0.449 β βββ β
10. β 0 β 9 β 0.407 β ββ β
11. β 0 β 10 β 0.368 β ββ β
12. β 0 β 11 β 0.333 β ββ β
13. β 0 β 12 β 0.301 β ββ β
14. β 0 β 13 β 0.273 β ββ β
15. β 0 β 14 β 0.247 β ββ β
16. β 0 β 15 β 0.223 β β β
17. β 0 β 16 β 0.202 β β β
18. β 0 β 17 β 0.183 β β β
19. β 0 β 18 β 0.165 β β β
20. β 0 β 19 β 0.15 β β β
21. β 0 β 20 β 0.135 β β β
22. β 0 β 21 β 0.122 β β β
23. β 0 β 22 β 0.111 β β β
24. β 0 β 23 β 0.1 β β β
25. β 0 β 24 β 0.091 β β β
26. β 1 β 25 β 1.082 β ββββββ β
27. β 1 β 26 β 1.979 β ββββββββββ β
28. β 1 β 27 β 2.791 β ββββββββββββββ β
29. β 1 β 28 β 3.525 β ββββββββββββββββββ β
30. β 1 β 29 β 4.19 β βββββββββββββββββββββ β | {"source_file": "exponentialtimedecayedsum.md"} | [
-0.08743393421173096,
-0.0042401524260640144,
-0.013094599358737469,
0.007450675591826439,
-0.013715323060750961,
-0.12748801708221436,
0.057339176535606384,
-0.044873565435409546,
-0.028813380748033524,
0.034526728093624115,
0.05758565664291382,
-0.05693399906158447,
0.020420949906110764,
... |
78f5ca42-42d7-4b47-9941-f5e609f1a6dc | 29. β 1 β 28 β 3.525 β ββββββββββββββββββ β
30. β 1 β 29 β 4.19 β βββββββββββββββββββββ β
31. β 1 β 30 β 4.791 β ββββββββββββββββββββββββ β
32. β 1 β 31 β 5.335 β βββββββββββββββββββββββββββ β
33. β 1 β 32 β 5.827 β ββββββββββββββββββββββββββββββ β
34. β 1 β 33 β 6.273 β ββββββββββββββββββββββββββββββββ β
35. β 1 β 34 β 6.676 β ββββββββββββββββββββββββββββββββββ β
36. β 1 β 35 β 7.041 β ββββββββββββββββββββββββββββββββββββ β
37. β 1 β 36 β 7.371 β βββββββββββββββββββββββββββββββββββββ β
38. β 1 β 37 β 7.669 β βββββββββββββββββββββββββββββββββββββββ β
39. β 1 β 38 β 7.939 β ββββββββββββββββββββββββββββββββββββββββ β
40. β 1 β 39 β 8.184 β βββββββββββββββββββββββββββββββββββββββββ β
41. β 1 β 40 β 8.405 β ββββββββββββββββββββββββββββββββββββββββββ β
42. β 1 β 41 β 8.605 β βββββββββββββββββββββββββββββββββββββββββββ β
43. β 1 β 42 β 8.786 β ββββββββββββββββββββββββββββββββββββββββββββ β
44. β 1 β 43 β 8.95 β βββββββββββββββββββββββββββββββββββββββββββββ β
45. β 1 β 44 β 9.098 β ββββββββββββββββββββββββββββββββββββββββββββββ β
46. β 1 β 45 β 9.233 β βββββββββββββββββββββββββββββββββββββββββββββββ β
47. β 1 β 46 β 9.354 β βββββββββββββββββββββββββββββββββββββββββββββββ β
48. β 1 β 47 β 9.464 β ββββββββββββββββββββββββββββββββββββββββββββββββ β
49. β 1 β 48 β 9.563 β ββββββββββββββββββββββββββββββββββββββββββββββββ β
50. β 1 β 49 β 9.653 β βββββββββββββββββββββββββββββββββββββββββββββββββ β
βββββββββ΄βββββββ΄βββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββββββββ | {"source_file": "exponentialtimedecayedsum.md"} | [
0.006419220007956028,
-0.0013311897637322545,
-0.03073887713253498,
-0.03607700765132904,
0.004249317571520805,
-0.05549175664782524,
0.047881923615932465,
-0.03243021294474602,
-0.0590481236577034,
0.1128634512424469,
0.034665729850530624,
-0.018558310344815254,
0.0769602432847023,
-0.014... |
ede20473-96b4-4760-aa52-d88e93832b21 | description: 'Calculates the value of
(P(tag = 1) - P(tag = 0))(log(P(tag = 1)) -
log(P(tag = 0)))
for each category.'
sidebar_position: 115
slug: /sql-reference/aggregate-functions/reference/categoricalinformationvalue
title: 'categoricalInformationValue'
doc_type: 'reference'
Calculates the value of
(P(tag = 1) - P(tag = 0))(log(P(tag = 1)) - log(P(tag = 0)))
for each category.
sql
categoricalInformationValue(category1, category2, ..., tag)
The result indicates how a discrete (categorical) feature
[category1, category2, ...]
contributes to a learning model that predicts the value of
tag
. | {"source_file": "categoricalinformationvalue.md"} | [
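A Python sketch of the formula (assumption: `P(tag = ·)` is estimated within each category from sample frequencies; this is illustrative, not the server implementation):

```python
import math

def categorical_information_value(categories, tags):
    # For each category, estimate P(tag=1) and P(tag=0) from frequencies,
    # then apply (P1 - P0) * (log(P1) - log(P0)).
    result = {}
    for cat in set(categories):
        rows = [t for c, t in zip(categories, tags) if c == cat]
        p1 = sum(rows) / len(rows)
        p0 = 1 - p1
        result[cat] = (p1 - p0) * (math.log(p1) - math.log(p0))
    return result
```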
0.03774235025048256,
0.024295270442962646,
-0.01492798700928688,
0.0410105399787426,
-0.01297577004879713,
-0.0028623463585972786,
0.06222856789827347,
0.05390365794301033,
0.03597138822078705,
-0.022197507321834564,
0.011195894330739975,
-0.08572209626436234,
0.0028500379994511604,
-0.043... |
05ac6bed-72c3-4d36-879d-c161674b0572 | description: 'Calculates the Pearson correlation coefficient.'
sidebar_position: 117
slug: /sql-reference/aggregate-functions/reference/corr
title: 'corr'
doc_type: 'reference'
corr
Calculates the
Pearson correlation coefficient
:
$$
\frac{\Sigma{(x - \bar{x})(y - \bar{y})}}{\sqrt{\Sigma{(x - \bar{x})^2} * \Sigma{(y - \bar{y})^2}}}
$$
:::note
This function uses a numerically unstable algorithm. If you need
numerical stability
in calculations, use the
corrStable
function. It is slower but provides a more accurate result.
:::
Syntax
sql
corr(x, y)
Arguments
x
β first variable.
(U)Int*
,
Float*
.
y
β second variable.
(U)Int*
,
Float*
.
Returned Value
The Pearson correlation coefficient.
Float64
.
Example
Query:
sql
DROP TABLE IF EXISTS series;
CREATE TABLE series
(
i UInt32,
x_value Float64,
y_value Float64
)
ENGINE = Memory;
INSERT INTO series(i, x_value, y_value) VALUES (1, 5.6, -4.4),(2, -9.6, 3),(3, -1.3, -4),(4, 5.3, 9.7),(5, 4.4, 0.037),(6, -8.6, -7.8),(7, 5.1, 9.3),(8, 7.9, -3.6),(9, -8.2, 0.62),(10, -3, 7.3);
sql
SELECT corr(x_value, y_value)
FROM series;
Result:
response
┌─corr(x_value, y_value)─┐
│     0.1730265755453256 │
└────────────────────────┘ | {"source_file": "corr.md"} | [
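For reference, the coefficient can be checked with a two-pass Python implementation. The two-pass form corresponds to the numerically stable variant (`corrStable`); ClickHouse's `corr` computes the same quantity in a single, less stable pass.

```python
import math

def pearson(xs, ys):
    # Two-pass Pearson correlation: means first, then deviations.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)
```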
-0.020083293318748474,
-0.095804862678051,
-0.0129557428881526,
0.026442112401127815,
-0.07500258088111877,
-0.022801171988248825,
0.01390058733522892,
0.03564770519733429,
-0.009039437398314476,
0.027896542102098465,
0.02475225180387497,
-0.013715649954974651,
0.02357875183224678,
-0.0518... |
93fb83c2-12f8-449c-b375-fde9223f69ed | description: 'Returns an array of the approximately most frequent values and their
counts in the specified column.'
sidebar_position: 107
slug: /sql-reference/aggregate-functions/reference/approxtopk
title: 'approx_top_k'
doc_type: 'reference'
approx_top_k
Returns an array of the approximately most frequent values and their counts in the specified column. The resulting array is sorted in descending order of approximate frequency of values (not by the values themselves).
sql
approx_top_k(N)(column)
approx_top_k(N, reserved)(column)
This function does not provide a guaranteed result. In certain situations, errors might occur and it might return frequent values that aren't the most frequent values.
We recommend using the
N < 10
value; performance is reduced with large
N
values. Maximum value of
N = 65536
.
Parameters
N
β The number of elements to return. Optional. Default value: 10.
reserved
— Defines how many cells are reserved for values. If uniq(column) > reserved, the result of the topK function will be approximate. Optional. Default value: N * 3.
Arguments
column
β The value to calculate frequency.
Example
Query:
sql
SELECT approx_top_k(2)(k)
FROM VALUES('k Char, w UInt64', ('y', 1), ('y', 1), ('x', 5), ('y', 1), ('z', 10));
Result:
text
┌─approx_top_k(2)(k)────┐
│ [('y',3,0),('x',1,0)] │
└───────────────────────┘
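For comparison, an exact top-k in Python needs memory proportional to the number of distinct values; `approx_top_k` trades that exactness for the bounded `reserved` cells (the third tuple element in the result above is an error estimate):

```python
from collections import Counter

data = ['y', 'y', 'x', 'y', 'z']
top2 = Counter(data).most_common(2)   # exact counts, unbounded memory
```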
approx_top_count
Is an alias to
approx_top_k
function
See Also
topK
topKWeighted
approx_top_sum | {"source_file": "approxtopk.md"} | [
0.028492962941527367,
0.007934893481433392,
-0.07430361956357956,
0.032636817544698715,
-0.08321544528007507,
-0.014312513172626495,
0.0440170094370842,
0.08677205443382263,
0.003046203637495637,
-0.005191211588680744,
0.014035661704838276,
0.012828212231397629,
0.08676479011774063,
-0.115... |
c3f7ebb0-2e20-4775-ba71-c666cb5e26d0 | description: 'Computes the correlation matrix over N variables.'
sidebar_position: 118
slug: /sql-reference/aggregate-functions/reference/corrmatrix
title: 'corrMatrix'
doc_type: 'reference'
corrMatrix
Computes the correlation matrix over N variables.
Syntax
sql
corrMatrix(x[, ...])
Arguments
x
β a variable number of parameters.
(U)Int8/16/32/64
,
Float*
.
Returned value
Correlation matrix.
Array
(
Array
(
Float64
)).
Example
Query:
sql
DROP TABLE IF EXISTS test;
CREATE TABLE test
(
a UInt32,
b Float64,
c Float64,
d Float64
)
ENGINE = Memory;
INSERT INTO test(a, b, c, d) VALUES (1, 5.6, -4.4, 2.6), (2, -9.6, 3, 3.3), (3, -1.3, -4, 1.2), (4, 5.3, 9.7, 2.3), (5, 4.4, 0.037, 1.222), (6, -8.6, -7.8, 2.1233), (7, 5.1, 9.3, 8.1222), (8, 7.9, -3.6, 9.837), (9, -8.2, 0.62, 8.43555), (10, -3, 7.3, 6.762);
sql
SELECT arrayMap(x -> round(x, 3), arrayJoin(corrMatrix(a, b, c, d))) AS corrMatrix
FROM test;
Result:
response
   ┌─corrMatrix─────────────┐
1. │ [1,-0.096,0.243,0.746] │
2. │ [-0.096,1,0.173,0.106] │
3. │ [0.243,0.173,1,0.258]  │
4. │ [0.746,0.106,0.258,1]  │
   └────────────────────────┘ | {"source_file": "corrmatrix.md"} | [
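Conceptually, the matrix is just the pairwise Pearson coefficient of every pair of columns, as this illustrative Python sketch (not ClickHouse code) shows:

```python
import math

def corr_matrix(cols):
    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        vx = sum((a - mx) ** 2 for a in xs)
        vy = sum((b - my) ** 2 for b in ys)
        return cov / math.sqrt(vx * vy)
    # Entry [i][j] is corr(column i, column j); the diagonal is 1.
    return [[pearson(a, b) for b in cols] for a in cols]
```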
0.006173374596983194,
-0.030527165159583092,
-0.10094638168811798,
0.04924479126930237,
-0.08989378809928894,
-0.015153445303440094,
-0.017777668312191963,
-0.04961032420396805,
-0.038603950291872025,
0.020920943468809128,
0.04860888794064522,
-0.009155279025435448,
0.033517688512802124,
-... |
8d881261-9fd5-49e4-840b-f9a11436fe36 | description: 'Computes an approximate quantile of a sample with relative-error guarantees.'
sidebar_position: 171
slug: /sql-reference/aggregate-functions/reference/quantileddsketch
title: 'quantileDD'
doc_type: 'reference'
Computes an approximate
quantile
of a sample with relative-error guarantees. It works by building a
DDSketch
.
Syntax
sql
quantileDD(relative_accuracy, [level])(expr)
Arguments
expr
β Column with numeric data.
Integer
,
Float
.
Parameters
relative_accuracy
β Relative accuracy of the quantile. Possible values are in the range from 0 to 1.
Float
. The size of the sketch depends on the range of the data and the relative accuracy. The larger the range and the smaller the relative accuracy, the larger the sketch. The rough memory size of the sketch is
log(max_value/min_value)/relative_accuracy
. The recommended value is 0.001 or higher.
level
β Level of quantile. Optional. Possible values are in the range from 0 to 1. Default value: 0.5.
Float
.
Returned value
Approximate quantile of the specified level.
Type:
Float64
.
Example
Input table has an integer and a float columns:
text
┌─a─┬─────b─┐
│ 1 │ 1.001 │
│ 2 │ 1.002 │
│ 3 │ 1.003 │
│ 4 │ 1.004 │
└───┴───────┘
Query to calculate 0.75-quantile (third quartile):
sql
SELECT quantileDD(0.01, 0.75)(a), quantileDD(0.01, 0.75)(b) FROM example_table;
Result:
text
┌─quantileDD(0.01, 0.75)(a)─┬─quantileDD(0.01, 0.75)(b)─┐
│         2.974233423476717 │                      1.01 │
└───────────────────────────┴───────────────────────────┘
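A minimal DDSketch-style estimator in Python shows where the relative-error guarantee comes from: values fall into logarithmic buckets of ratio `gamma = (1 + a) / (1 - a)`, and each bucket's geometric midpoint is within relative accuracy `a` of every value the bucket holds. The sketch below assumes positive inputs and is illustrative, not the server implementation.

```python
import math
from collections import Counter

def quantile_dd(values, relative_accuracy, level=0.5):
    gamma = (1 + relative_accuracy) / (1 - relative_accuracy)
    # Bucket key k holds values v with gamma^(k-1) < v <= gamma^k.
    buckets = Counter(math.ceil(math.log(v, gamma)) for v in values)
    rank = level * (len(values) - 1)
    seen = 0
    for key in sorted(buckets):
        seen += buckets[key]
        if seen > rank:
            # Bucket midpoint: within relative_accuracy of any v in the bucket.
            return 2 * gamma ** key / (gamma + 1)
```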
See Also
median
quantiles | {"source_file": "quantileddsketch.md"} | [
-0.016947651281952858,
-0.013619715347886086,
-0.04669088497757912,
-0.02664467692375183,
-0.0550876148045063,
-0.08590181916952133,
-0.006631446070969105,
0.08983885496854782,
-0.040659110993146896,
-0.016562316566705704,
-0.012540343217551708,
-0.054415348917245865,
0.0540078729391098,
-... |
1bd6967d-af9a-43ee-b31d-8e8a9f8b6df3 | description: 'Sorts time series by timestamp in ascending order.'
sidebar_position: 146
slug: /sql-reference/aggregate-functions/reference/timeSeriesGroupArray
title: 'timeSeriesGroupArray'
doc_type: 'reference'
timeSeriesGroupArray
Sorts time series by timestamp in ascending order.
Syntax
sql
timeSeriesGroupArray(timestamp, value)
Arguments
timestamp
- timestamp of the sample
value
- value of the time series corresponding to the
timestamp
Returned value
The function returns an array of tuples (
timestamp
,
value
) sorted by
timestamp
in ascending order.
If there are multiple values for the same
timestamp
then the function chooses the greatest of these values.
Example
sql
WITH
[110, 120, 130, 140, 140, 100]::Array(UInt32) AS timestamps,
[1, 6, 8, 17, 19, 5]::Array(Float32) AS values -- array of values corresponding to timestamps above
SELECT timeSeriesGroupArray(timestamp, value)
FROM
(
-- This subquery converts arrays of timestamps and values into rows of `timestamp`, `value`
SELECT
arrayJoin(arrayZip(timestamps, values)) AS ts_and_val,
ts_and_val.1 AS timestamp,
ts_and_val.2 AS value
);
Response:
response
   ┌─timeSeriesGroupArray(timestamp, value)─────┐
1. │ [(100,5),(110,1),(120,6),(130,8),(140,19)] │
   └────────────────────────────────────────────┘
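The behavior can be modeled in a few lines of Python (illustrative, not ClickHouse code): keep the greatest value per timestamp, then sort by timestamp.

```python
def time_series_group_array(timestamps, values):
    best = {}
    for t, v in zip(timestamps, values):
        # For duplicate timestamps, keep the greatest value.
        if t not in best or v > best[t]:
            best[t] = v
    return sorted(best.items())   # ascending by timestamp
```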
It is also possible to pass multiple samples of timestamps and values as Arrays of equal size. The same query with array arguments:
sql
WITH
[110, 120, 130, 140, 140, 100]::Array(UInt32) AS timestamps,
[1, 6, 8, 17, 19, 5]::Array(Float32) AS values -- array of values corresponding to timestamps above
SELECT timeSeriesGroupArray(timestamps, values);
:::note
This function is experimental, enable it by setting
allow_experimental_ts_to_grid_aggregate_function=true
.
::: | {"source_file": "timeSeriesGroupArray.md"} | [
-0.06375601887702942,
-0.01981058157980442,
0.03909418731927872,
-0.04085761308670044,
-0.059491582214832306,
-0.026549579575657845,
0.033977873623371124,
0.0226153451949358,
0.0008906075963750482,
0.02628250978887081,
-0.05067726969718933,
0.01289262156933546,
-0.039713628590106964,
0.016... |
d66a89b3-00ed-492f-ac8b-39746ef84103 | description: 'Selects a frequently occurring value using the heavy hitters algorithm.
If there is a value that occurs more than in half the cases in each of the query
execution threads, this value is returned. Normally, the result is nondeterministic.'
sidebar_position: 104
slug: /sql-reference/aggregate-functions/reference/anyheavy
title: 'anyHeavy'
doc_type: 'reference'
anyHeavy
Selects a frequently occurring value using the
heavy hitters
algorithm. If there is a value that occurs in more than half of the cases in each of the query's execution threads, this value is returned. Normally, the result is nondeterministic.
sql
anyHeavy(column)
Arguments
column
β The column name.
Example
Take the
OnTime
data set and select any frequently occurring value in the
AirlineID
column.
sql
SELECT anyHeavy(AirlineID) AS res
FROM ontime
text
┌───res─┐
│ 19690 │
└───────┘ | {"source_file": "anyheavy.md"} | [
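The classic single-candidate heavy-hitters algorithm (Boyer-Moore majority vote) illustrates the guarantee: if one value occurs in more than half of the rows of a stream, it is certain to survive as the candidate. This Python sketch is illustrative, not the exact server implementation.

```python
def any_heavy(values):
    candidate, count = None, 0
    for v in values:
        if count == 0:
            candidate, count = v, 1      # adopt a new candidate
        elif v == candidate:
            count += 1
        else:
            count -= 1                   # pair this row off against the candidate
    return candidate
```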
0.05716777592897415,
0.0018218717304989696,
-0.016273029148578644,
0.11966165155172348,
-0.004298589192330837,
-0.04024434834718704,
0.004487235564738512,
-0.008138642646372318,
0.06534328311681747,
0.009295839816331863,
0.03844776377081871,
-0.023899080231785774,
-0.007323036901652813,
-0... |
3cfbc994-40ba-4e65-97e7-66fa1367f195 | description: 'Computes an approximate quantile of a sample consisting of bfloat16
numbers.'
sidebar_position: 171
slug: /sql-reference/aggregate-functions/reference/quantilebfloat16
title: 'quantileBFloat16'
doc_type: 'reference'
quantileBFloat16Weighted
Like
quantileBFloat16
but takes into account the weight of each sequence member.
Computes an approximate
quantile
of a sample consisting of
bfloat16
numbers.
bfloat16
is a floating-point data type with 1 sign bit, 8 exponent bits and 7 fraction bits.
The function converts input values to 32-bit floats and takes the most significant 16 bits. Then it calculates
bfloat16
quantile value and converts the result to a 64-bit float by appending zero bits.
The function is a fast quantile estimator with a relative error no more than 0.390625%.
Syntax
sql
quantileBFloat16[(level)](expr)
Alias:
medianBFloat16
Arguments
expr
β Column with numeric data.
Integer
,
Float
.
Parameters
level
β Level of quantile. Optional. Possible values are in the range from 0 to 1. Default value: 0.5.
Float
.
Returned value
Approximate quantile of the specified level.
Type:
Float64
.
Example
Input table has an integer and a float columns:
text
┌─a─┬─────b─┐
│ 1 │ 1.001 │
│ 2 │ 1.002 │
│ 3 │ 1.003 │
│ 4 │ 1.004 │
└───┴───────┘
Query to calculate 0.75-quantile (third quartile):
sql
SELECT quantileBFloat16(0.75)(a), quantileBFloat16(0.75)(b) FROM example_table;
Result:
text
┌─quantileBFloat16(0.75)(a)─┬─quantileBFloat16(0.75)(b)─┐
│                         3 │                         1 │
└───────────────────────────┴───────────────────────────┘
Note that all floating point values in the example are truncated to 1.0 when converting to
bfloat16
.
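The truncation can be reproduced in Python by masking the float32 bit pattern down to its 16 most significant bits (1 sign, 8 exponent, 7 fraction bits):

```python
import struct

def to_bfloat16(x):
    # Reinterpret as float32 bits, keep the top 16 bits, zero the rest.
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return struct.unpack('<f', struct.pack('<I', bits & 0xFFFF0000))[0]

to_bfloat16(1.004)  # 1.0 -- seven fraction bits cannot represent the 0.004
```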
See Also
median
quantiles | {"source_file": "quantilebfloat16.md"} | [
-0.040612224489450455,
0.041230209171772,
-0.07102710008621216,
-0.028071127831935883,
-0.005505031906068325,
-0.12508203089237213,
0.04078766331076622,
0.06269349157810211,
-0.03989233449101448,
-0.03818473592400551,
-0.04769585654139519,
-0.07785496115684509,
-0.01966102607548237,
-0.013... |
9b9f1122-55cc-4df3-bfb1-8ceae3e8d2ea | description: 'Aggregate function that calculates the maximum across a group of values.'
sidebar_position: 162
slug: /sql-reference/aggregate-functions/reference/max
title: 'max'
doc_type: 'reference'
Aggregate function that calculates the maximum across a group of values.
Example:
sql
SELECT max(salary) FROM employees;
sql
SELECT department, max(salary) FROM employees GROUP BY department;
If you need non-aggregate function to choose a maximum of two values, see
greatest
:
sql
SELECT greatest(a, b) FROM table; | {"source_file": "max.md"} | [
0.020406626164913177,
0.0009003105224110186,
-0.019086048007011414,
-0.059947073459625244,
-0.12713652849197388,
-0.015536082908511162,
-0.04306602105498314,
0.09482674300670624,
-0.06530582904815674,
0.026967622339725494,
0.01183082815259695,
0.009827069006860256,
0.04100201278924942,
-0.... |
4091364b-386a-401f-a128-bc4810b86016 | description: 'Applies bit-wise
XOR
for series of numbers.'
sidebar_position: 153
slug: /sql-reference/aggregate-functions/reference/groupbitxor
title: 'groupBitXor'
doc_type: 'reference'
groupBitXor
Applies bit-wise
XOR
for series of numbers.
sql
groupBitXor(expr)
Arguments
expr
β An expression that results in
UInt*
or
Int*
type.
Return value
Value of the
UInt*
or
Int*
type.
Example
Test data:
text
binary decimal
00101100 = 44
00011100 = 28
00001101 = 13
01010101 = 85
Query:
sql
SELECT groupBitXor(num) FROM t
Where
num
is the column with the test data.
Result:
text
binary decimal
01101000 = 104 | {"source_file": "groupbitxor.md"} | [
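The same result, folding XOR across the test data in Python:

```python
from functools import reduce
from operator import xor

nums = [44, 28, 13, 85]          # 00101100, 00011100, 00001101, 01010101
result = reduce(xor, nums)       # 104 == 0b01101000, matching the example
```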
0.01636764220893383,
0.048841290175914764,
-0.064630426466465,
0.030375557020306587,
-0.0621243380010128,
-0.04555104300379753,
0.07872679084539413,
0.02129516936838627,
-0.015228262171149254,
-0.03507579118013382,
0.023119524121284485,
-0.06175500154495239,
0.06072166562080383,
-0.0654093... |
e1e5109f-e5e9-4f0b-b329-cab5d99ca6cd | description: 'With the determined precision computes the quantile of a numeric data
sequence according to the weight of each sequence member.'
sidebar_position: 181
slug: /sql-reference/aggregate-functions/reference/quantiletimingweighted
title: 'quantileTimingWeighted'
doc_type: 'reference'
quantileTimingWeighted
With the determined precision computes the
quantile
of a numeric data sequence according to the weight of each sequence member.
The result is deterministic (it does not depend on the query processing order). The function is optimized for working with sequences which describe distributions like loading web pages times or backend response times.
When using multiple
quantile*
functions with different levels in a query, the internal states are not combined (that is, the query works less efficiently than it could). In this case, use the
quantiles
function.
Syntax
sql
quantileTimingWeighted(level)(expr, weight)
Alias:
medianTimingWeighted
.
Arguments
level
β Level of quantile. Optional parameter. Constant floating-point number from 0 to 1. We recommend using a
level
value in the range of
[0.01, 0.99]
. Default value: 0.5. At
level=0.5
the function calculates
median
.
expr
β
Expression
over a column values returning a
Float*
-type number.
- If negative values are passed to the function, the behavior is undefined.
- If the value is greater than 30,000 (a page loading time of more than 30 seconds), it is assumed to be 30,000.
weight
β Column with weights of sequence elements. Weight is a number of value occurrences.
Accuracy
The calculation is accurate if:
Total number of values does not exceed 5670.
Total number of values exceeds 5670, but the page loading time is less than 1024ms.
Otherwise, the result of the calculation is rounded to the nearest multiple of 16 ms.
:::note
For calculating page loading time quantiles, this function is more effective and accurate than
quantile
.
:::
Returned value
Quantile of the specified level.
Type:
Float32
.
:::note
If no values are passed to the function (when using
quantileTimingIf
),
NaN
is returned. The purpose of this is to differentiate these cases from cases that result in zero. See
ORDER BY clause
for notes on sorting
NaN
values.
:::
Example
Input table:
text
┌─response_time─┬─weight─┐
│            68 │      1 │
│           104 │      2 │
│           112 │      3 │
│           126 │      2 │
│           138 │      1 │
│           162 │      1 │
└───────────────┴────────┘
Query:
sql
SELECT quantileTimingWeighted(response_time, weight) FROM t
Result:
text
┌─quantileTimingWeighted(response_time, weight)─┐
│                                           112 │
└───────────────────────────────────────────────┘
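A simple Python model of a weighted quantile (a weight of `w` counts its value `w` times) reproduces this example. The sketch is illustrative only; it ignores the 16 ms bucketing that `quantileTimingWeighted` applies outside its exact-accuracy range.

```python
def weighted_quantile(pairs, level=0.5):
    # pairs: (value, weight); walk the cumulative weight up to the level.
    total = sum(w for _, w in pairs)
    target = level * total
    seen = 0
    for value, weight in sorted(pairs):
        seen += weight
        if seen >= target:
            return value
```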
quantilesTimingWeighted
Same as
quantileTimingWeighted
, but accepts multiple quantile levels as parameters and returns an Array filled with the corresponding quantile values.
-0.07569252699613571,
-0.020484542474150658,
0.00596612086519599,
0.02069641463458538,
-0.11344544589519501,
-0.07444468885660172,
0.03058898262679577,
0.06772001832723618,
0.007169595453888178,
-0.01677054539322853,
-0.011036128737032413,
-0.08333270251750946,
0.010331098921597004,
-0.059... |
85145795-ccef-41bc-a14e-d93430c0dbdf | quantilesTimingWeighted
Same as
quantileTimingWeighted
, but accepts multiple quantile levels as parameters and returns an Array filled with the corresponding quantile values.
Example
Input table:
text
┌─response_time─┬─weight─┐
│            68 │      1 │
│           104 │      2 │
│           112 │      3 │
│           126 │      2 │
│           138 │      1 │
│           162 │      1 │
└───────────────┴────────┘
Query:
sql
SELECT quantilesTimingWeighted(0.5, 0.99)(response_time, weight) FROM t
Result:
text
┌─quantilesTimingWeighted(0.5, 0.99)(response_time, weight)─┐
│ [112,162]                                                 │
└───────────────────────────────────────────────────────────┘
See Also
median
quantiles | {"source_file": "quantiletimingweighted.md"} | [
-0.004608729854226112,
0.04080076143145561,
-0.013262886554002762,
0.01771734654903412,
-0.0792820081114769,
-0.05885891243815422,
0.056768208742141724,
-0.022324323654174805,
-0.0036376318894326687,
-0.03146873787045479,
-0.03051748313009739,
-0.07688123732805252,
-0.03398917242884636,
0.... |
cbaa1dd7-2072-48f8-a93d-7881f9cde40e | description: 'Computes quantile of a numeric data sequence using linear interpolation,
taking into account the weight of each element.'
sidebar_position: 176
slug: /sql-reference/aggregate-functions/reference/quantileInterpolatedWeighted
title: 'quantileInterpolatedWeighted'
doc_type: 'reference'
quantileInterpolatedWeighted
Computes
quantile
of a numeric data sequence using linear interpolation, taking into account the weight of each element.
To get the interpolated value, all the passed values are combined into an array, which are then sorted by their corresponding weights. Quantile interpolation is then performed using the
weighted percentile method
by building a cumulative distribution based on weights and then a linear interpolation is performed using the weights and the values to compute the quantiles.
When using multiple
quantile*
functions with different levels in a query, the internal states are not combined (that is, the query works less efficiently than it could). In this case, use the
quantiles
function.
Syntax
sql
quantileInterpolatedWeighted(level)(expr, weight)
Alias:
medianInterpolatedWeighted
.
Arguments
level
β Level of quantile. Optional parameter. Constant floating-point number from 0 to 1. We recommend using a
level
value in the range of
[0.01, 0.99]
. Default value: 0.5. At
level=0.5
the function calculates
median
.
expr
β Expression over the column values resulting in numeric
data types
,
Date
or
DateTime
.
weight
β Column with weights of sequence members. Weight is a number of value occurrences.
Returned value
Quantile of the specified level.
Type:
Float64
for numeric data type input.
Date
if input values have the
Date
type.
DateTime
if input values have the
DateTime
type.
Example
Input table:
text
┌─n─┬─val─┐
│ 0 │   3 │
│ 1 │   2 │
│ 2 │   1 │
│ 5 │   4 │
└───┴─────┘
Query:
sql
SELECT quantileInterpolatedWeighted(n, val) FROM t
Result:
text
┌─quantileInterpolatedWeighted(n, val)─┐
│                                    1 │
└──────────────────────────────────────┘
See Also
median
quantiles | {"source_file": "quantileinterpolatedweighted.md"} | [
-0.08718591928482056,
0.00651442538946867,
0.042400479316711426,
0.005877922289073467,
-0.09403061866760254,
-0.06837435066699982,
0.03003528341650963,
0.08329331129789352,
-0.043250251561403275,
-0.0010106442496180534,
-0.02019154466688633,
-0.08225304633378983,
0.061955928802490234,
-0.0... |
ff1ae1e3-b5fb-4525-9ad9-2f494d28e2d8 | description: 'The result is equal to the square root of varPop.'
sidebar_position: 188
slug: /sql-reference/aggregate-functions/reference/stddevpop
title: 'stddevPop'
doc_type: 'reference'
stddevPop
The result is equal to the square root of
varPop
.
Aliases:
STD
,
STDDEV_POP
.
:::note
This function uses a numerically unstable algorithm. If you need
numerical stability
in calculations, use the
stddevPopStable
function. It works slower but provides a lower computational error.
:::
Syntax
sql
stddevPop(x)
Parameters
x
: Population of values to find the standard deviation of.
(U)Int*
,
Float*
,
Decimal*
.
Returned value
Standard deviation (the square root of the variance) of
x
.
Float64
.
Example
Query:
```sql
DROP TABLE IF EXISTS test_data;
CREATE TABLE test_data
(
population UInt8
)
ENGINE = Log;
INSERT INTO test_data VALUES (3),(3),(3),(4),(4),(5),(5),(7),(11),(15);
SELECT
stddevPop(population) AS stddev
FROM test_data;
```
Result:
response
┌────────────stddev─┐
│ 3.794733192202055 │
└───────────────────┘ | {"source_file": "stddevpop.md"} | [
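The same figure falls out of the definition directly, as this Python check (illustrative, not ClickHouse code) shows:

```python
import math

def stddev_pop(xs):
    # Population standard deviation: sqrt of the mean squared deviation.
    mean = sum(xs) / len(xs)
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / len(xs))

stddev_pop([3, 3, 3, 4, 4, 5, 5, 7, 11, 15])  # ≈ 3.7947, matching the result above
```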
0.0118697015568614,
0.007838724181056023,
0.013815626502037048,
0.04216237738728523,
-0.06820069253444672,
-0.046927377581596375,
0.03465772420167923,
0.1298537701368332,
0.03461333364248276,
-0.0236872136592865,
0.06700471043586731,
-0.02060377225279808,
0.056634530425071716,
-0.128103226... |
eeaafade-a941-4e7f-9615-6e51c6fbc1a7 | description: 'Calculates the approximate number of different argument values.'
sidebar_position: 205
slug: /sql-reference/aggregate-functions/reference/uniqcombined
title: 'uniqCombined'
doc_type: 'reference'
uniqCombined
Calculates the approximate number of different argument values.
sql
uniqCombined(HLL_precision)(x[, ...])
The
uniqCombined
function is a good choice for calculating the number of different values.
Arguments
HLL_precision
: The base-2 logarithm of the number of cells in
HyperLogLog
. Optional, you can use the function as
uniqCombined(x[, ...])
. The default value for
HLL_precision
is 17, which is effectively 96 KiB of space (2^17 cells, 6 bits each).
x
: A variable number of parameters. Parameters can be
Tuple
,
Array
,
Date
,
DateTime
,
String
, or numeric types.
Returned value
A
UInt64
-type number.
Implementation details
The
uniqCombined
function:
Calculates a hash (64-bit hash for
String
and 32-bit otherwise) for all parameters in the aggregate, then uses it in calculations.
Uses a combination of three algorithms: array, hash table, and HyperLogLog with an error correction table.
For a small number of distinct elements, an array is used.
When the set size is larger, a hash table is used.
For a larger number of elements, HyperLogLog is used, which will occupy a fixed amount of memory.
Provides the result deterministically (it does not depend on the query processing order).
:::note
Since it uses a 32-bit hash for non-
String
types, the result will have very high error for cardinalities significantly larger than
UINT_MAX
(error will raise quickly after a few tens of billions of distinct values), hence in this case you should use
uniqCombined64
.
:::
Compared to the
uniq
function, the
uniqCombined
function:
Consumes several times less memory.
Calculates with several times higher accuracy.
Usually has slightly lower performance. In some scenarios,
uniqCombined
can perform better than
uniq
, for example, with distributed queries that transmit a large number of aggregation states over the network.
Example
Query:
sql
SELECT uniqCombined(number) FROM numbers(1e6);
Result:
response
┌─uniqCombined(number)─┐
│              1001148 │ -- 1.00 million
└──────────────────────┘
See the example section of
uniqCombined64
for an example of the difference between
uniqCombined
and
uniqCombined64
for much larger inputs.
See Also
uniq
uniqCombined64
uniqHLL12
uniqExact
uniqTheta | {"source_file": "uniqcombined.md"} | [
0.04619449004530907,
0.02211967296898365,
-0.08594276756048203,
0.018622880801558495,
-0.10457743704319,
-0.012369879521429539,
0.08159440010786057,
0.01671214960515499,
-0.00554317282512784,
-0.0145995132625103,
-0.025472071021795273,
-0.006384633481502533,
0.05998195707798004,
-0.0682409... |
f11ce719-461c-4353-8a63-040379de7eb6 | description: 'Calculates the value of the population covariance'
sidebar_position: 123
slug: /sql-reference/aggregate-functions/reference/covarpopstable
title: 'covarPopStable'
doc_type: 'reference'
covarPopStable
Calculates the value of the population covariance:
$$
\frac{\Sigma{(x - \bar{x})(y - \bar{y})}}{n}
$$
It is similar to the
covarPop
function, but uses a numerically stable algorithm. As a result,
covarPopStable
is slower than
covarPop
but produces a more accurate result.
Syntax
sql
covarPopStable(x, y)
Arguments
x
β first variable.
(U)Int*
,
Float*
,
Decimal
.
y
β second variable.
(U)Int*
,
Float*
,
Decimal
.
Returned Value
The population covariance between
x
and
y
.
Float64
.
Example
Query:
sql
DROP TABLE IF EXISTS series;
CREATE TABLE series(i UInt32, x_value Float64, y_value Float64) ENGINE = Memory;
INSERT INTO series(i, x_value, y_value) VALUES (1, 5.6,-4.4),(2, -9.6,3),(3, -1.3,-4),(4, 5.3,9.7),(5, 4.4,0.037),(6, -8.6,-7.8),(7, 5.1,9.3),(8, 7.9,-3.6),(9, -8.2,0.62),(10, -3,7.3);
sql
SELECT covarPopStable(x_value, y_value)
FROM
(
SELECT
x_value,
y_value
FROM series
);
Result:
reference
ββcovarPopStable(x_value, y_value)ββ
β 6.485648 β
ββββββββββββββββββββββββββββββββββββ | {"source_file": "covarpopstable.md"} | [
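The one-pass numerical stability can be sketched with Welford-style co-moment updates (a sketch of the general technique, not ClickHouse's exact implementation); on the data above it reproduces the documented result:

```python
def covar_pop_stable(xs, ys):
    """One-pass population covariance using numerically stable
    Welford-style co-moment updates."""
    n = 0
    mean_x = mean_y = co_moment = 0.0
    for x, y in zip(xs, ys):
        n += 1
        dx = x - mean_x
        mean_x += dx / n
        mean_y += (y - mean_y) / n
        co_moment += dx * (y - mean_y)  # uses the *updated* mean_y
    return co_moment / n

xs = [5.6, -9.6, -1.3, 5.3, 4.4, -8.6, 5.1, 7.9, -8.2, -3]
ys = [-4.4, 3, -4, 9.7, 0.037, -7.8, 9.3, -3.6, 0.62, 7.3]
result = covar_pop_stable(xs, ys)  # matches the documented 6.485648
```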
-0.01076620165258646,
-0.08199422061443329,
-0.009812559001147747,
0.03154883161187172,
-0.05192882567644119,
-0.06601189821958542,
0.05660043656826019,
0.02863302081823349,
-0.0007879528566263616,
0.0029820252675563097,
0.06883173435926437,
-0.02937191165983677,
0.011690194718539715,
-0.0... |
98701afa-d478-489a-a63e-d35f6f98528f | description: 'Aggregate function that calculates PromQL-like changes over time series data on the specified grid.'
sidebar_position: 229
slug: /sql-reference/aggregate-functions/reference/timeSeriesChangesToGrid
title: 'timeSeriesChangesToGrid'
doc_type: 'reference'
Aggregate function that takes time series data as pairs of timestamps and values and calculates
PromQL-like changes
from this data on a regular time grid described by start timestamp, end timestamp and step. For each point on the grid the samples for calculating
changes
are considered within the specified time window.
Parameters:
-
start timestamp
- specifies start of the grid
-
end timestamp
- specifies end of the grid
-
grid step
- specifies step of the grid in seconds
-
staleness
- specifies the maximum "staleness" in seconds of the considered samples
Arguments:
-
timestamp
- timestamp of the sample
-
value
- value of the time series corresponding to the
timestamp
Return value:
changes
values on the specified grid as an
Array(Nullable(Float64))
. The returned array contains one value for each time grid point. The value is NULL if there are no samples within the window to calculate the changes value for a particular grid point.
Example:
The following query calculates
changes
values on the grid [90, 105, 120, 135, 150, 165, 180, 195, 210, 225]:
sql
WITH
-- NOTE: the gap between 130 and 190 is to show how values are filled for ts = 180 according to window parameter
[110, 120, 130, 190, 200, 210, 220, 230]::Array(DateTime) AS timestamps,
[1, 1, 3, 5, 5, 8, 12, 13]::Array(Float32) AS values, -- array of values corresponding to timestamps above
90 AS start_ts, -- start of timestamp grid
90 + 135 AS end_ts, -- end of timestamp grid
15 AS step_seconds, -- step of timestamp grid
45 AS window_seconds -- "staleness" window
SELECT timeSeriesChangesToGrid(start_ts, end_ts, step_seconds, window_seconds)(timestamp, value)
FROM
(
-- This subquery converts arrays of timestamps and values into rows of `timestamp`, `value`
SELECT
arrayJoin(arrayZip(timestamps, values)) AS ts_and_val,
ts_and_val.1 AS timestamp,
ts_and_val.2 AS value
);
Response:
response
ββtimeSeriesChangesToGrid(start_ts, end_ts, step_seconds, window_seconds)(timestamp, value)ββ
1. β [NULL,NULL,0,1,1,1,NULL,0,1,2] β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Also it is possible to pass multiple samples of timestamps and values as Arrays of equal size. The same query with array arguments: | {"source_file": "timeSeriesChangesToGrid.md"} | [
-0.07435086369514465,
-0.004902002401649952,
-0.0679187923669815,
0.054010216146707535,
-0.030211759731173515,
-0.023779144510626793,
0.029958490282297134,
0.0473550446331501,
0.0635761246085167,
0.03150938078761101,
-0.04286261647939682,
-0.07587293535470963,
-0.01376335695385933,
-0.0238... |
e0cd1042-ebb1-4932-8278-75bbe4754b92 | Also it is possible to pass multiple samples of timestamps and values as Arrays of equal size. The same query with array arguments:
sql
WITH
[110, 120, 130, 190, 200, 210, 220, 230]::Array(DateTime) AS timestamps,
[1, 1, 3, 5, 5, 8, 12, 13]::Array(Float32) AS values,
90 AS start_ts,
90 + 135 AS end_ts,
15 AS step_seconds,
45 AS window_seconds
SELECT timeSeriesChangesToGrid(start_ts, end_ts, step_seconds, window_seconds)(timestamps, values);
:::note
This function is experimental, enable it by setting
allow_experimental_ts_to_grid_aggregate_function=true
.
::: | {"source_file": "timeSeriesChangesToGrid.md"} | [
-0.030458806082606316,
0.007097488734871149,
-0.0040633799508214,
0.033053044229745865,
-0.049320198595523834,
0.0157928504049778,
0.051577214151620865,
0.0034328089095652103,
0.003394232364371419,
0.0006997704040259123,
-0.07335778325796127,
-0.10307113081216812,
-0.026301991194486618,
0.... |
b29caf3d-7b9f-48af-8872-80c938e8591b | description: 'Calculates the
arg
value for a maximum
val
value.'
sidebar_position: 109
slug: /sql-reference/aggregate-functions/reference/argmax
title: 'argMax'
doc_type: 'reference'
argMax
Calculates the
arg
value for a maximum
val
value. If there are multiple rows with equal
val
being the maximum, which of the associated
arg
is returned is not deterministic.
Both parts, the
arg
and the
max
, behave as
aggregate functions
; they both skip
Null
during processing and return non-
Null
values if non-
Null
values are available.
Syntax
sql
argMax(arg, val)
Arguments
arg
β Argument.
val
β Value.
Returned value
arg
value that corresponds to maximum
val
value.
Type: matches
arg
type.
Example
Input table:
text
ββuserββββββ¬βsalaryββ
β director β 5000 β
β manager β 3000 β
β worker β 1000 β
ββββββββββββ΄βββββββββ
Query:
sql
SELECT argMax(user, salary) FROM salary;
Result:
text
ββargMax(user, salary)ββ
β director β
ββββββββββββββββββββββββ
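The skip-Null behaviour can be sketched as follows (a simplified illustrative model with Python None standing in for Null; the real function also handles Null in the arg part independently):

```python
def arg_max(rows):
    """rows: iterable of (arg, val) pairs. Returns the arg of the maximum
    val, skipping pairs whose val is None; None if no non-None val exists."""
    best = None
    for arg, val in rows:
        if val is None:
            continue  # Nulls are skipped during processing
        if best is None or val > best[1]:
            best = (arg, val)
    return best[0] if best else None

top = arg_max([("director", 5000), ("manager", 3000), ("worker", 1000)])
```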
Extended example
```sql
CREATE TABLE test
(
a Nullable(String),
b Nullable(Int64)
)
ENGINE = Memory AS
SELECT *
FROM VALUES(('a', 1), ('b', 2), ('c', 2), (NULL, 3), (NULL, NULL), ('d', NULL));
SELECT * FROM test;
ββaβββββ¬ββββbββ
β a β 1 β
β b β 2 β
β c β 2 β
β α΄Ία΅α΄Έα΄Έ β 3 β
β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β
β d β α΄Ία΅α΄Έα΄Έ β
ββββββββ΄βββββββ
SELECT argMax(a, b), max(b) FROM test;
ββargMax(a, b)ββ¬βmax(b)ββ
β b            β      3 β -- argMax = 'b' because it is the first non-Null value, max(b) is from another row!
ββββββββββββββββ΄βββββββββ
SELECT argMax(tuple(a), b) FROM test;
ββargMax(tuple(a), b)ββ
β (NULL)              β -- A Tuple that contains only a NULL value is not NULL, so the aggregate function will not skip that row because of that NULL value
βββββββββββββββββββββββ
SELECT (argMax((a, b), b) as t).1 argMaxA, t.2 argMaxB FROM test;
ββargMaxAββ¬βargMaxBββ
β α΄Ία΅α΄Έα΄Έ    β       3 β -- you can use Tuple and get both (all - tuple(*)) columns for the corresponding max(b)
βββββββββββ΄ββββββββββ
SELECT argMax(a, b), max(b) FROM test WHERE a IS NULL AND b IS NULL;
ββargMax(a, b)ββ¬βmax(b)ββ
β α΄Ία΅α΄Έα΄Έ         β   α΄Ία΅α΄Έα΄Έ β -- All aggregated rows contain at least one NULL value because of the filter, so all rows are skipped and the result is NULL
ββββββββββββββββ΄βββββββββ
SELECT argMax(a, (b,a)) FROM test;
ββargMax(a, tuple(b, a))ββ
β c                      β -- There are two rows with b=2; using a Tuple in max makes it possible to return an arg other than the first one
ββββββββββββββββββββββββββ
SELECT argMax(a, tuple(b)) FROM test;
ββargMax(a, tuple(b))ββ
β b                   β -- Tuple can be used in max so that Nulls are not skipped
βββββββββββββββββββββββ
```
See also
Tuple | {"source_file": "argmax.md"} | [
-0.025974879041314125,
-0.016977008432149887,
-0.0342649482190609,
0.04487421736121178,
-0.07792097330093384,
-0.04594457522034645,
0.09749827533960342,
0.02084197662770748,
-0.06957491487264633,
0.029229916632175446,
0.00868988037109375,
0.001871206215582788,
0.0710449069738388,
-0.087207... |
b077725c-4ab5-4aa5-a48a-3867de481b02 | description: 'Applies bit-wise
OR
to a series of numbers.'
sidebar_position: 152
slug: /sql-reference/aggregate-functions/reference/groupbitor
title: 'groupBitOr'
doc_type: 'reference'
groupBitOr
Applies bit-wise
OR
to a series of numbers.
sql
groupBitOr(expr)
Arguments
expr
β An expression that results in
UInt*
or
Int*
type.
Returned value
Value of the
UInt*
or
Int*
type.
Example
Test data:
text
binary decimal
00101100 = 44
00011100 = 28
00001101 = 13
01010101 = 85
Query:
sql
SELECT groupBitOr(num) FROM t
Where
num
is the column with the test data.
Result:
text
binary decimal
01111101 = 125 | {"source_file": "groupbitor.md"} | [
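The same result can be checked by folding bitwise OR over the test data (a sketch in Python, not ClickHouse code):

```python
from functools import reduce
from operator import or_

nums = [44, 28, 13, 85]  # 00101100, 00011100, 00001101, 01010101
result = reduce(or_, nums)  # 01111101 in binary
```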
0.009615493938326836,
0.06514894217252731,
-0.08342631906270981,
0.0410761758685112,
-0.06899791955947876,
-0.038279034197330475,
0.0717511996626854,
0.0403621606528759,
-0.004541604779660702,
-0.03160467743873596,
0.017981192097067833,
-0.06924159079790115,
0.03749419003725052,
-0.0654359... |
5ec34fea-304a-414e-9aa1-ea52c33cb79c | description: 'Computes an approximate quantile of a numeric data sequence using the
t-digest algorithm.'
sidebar_position: 179
slug: /sql-reference/aggregate-functions/reference/quantiletdigestweighted
title: 'quantileTDigestWeighted'
doc_type: 'reference'
quantileTDigestWeighted
Computes an approximate
quantile
of a numeric data sequence using the
t-digest
algorithm. The function takes into account the weight of each sequence member. The maximum error is 1%. Memory consumption is
log(n)
, where
n
is the number of values.
The performance of the function is lower than that of
quantile
or
quantileTiming
. In terms of the ratio of State size to precision, this function is much better than
quantile
.
The result depends on the order of running the query, and is nondeterministic.
When using multiple
quantile*
functions with different levels in a query, the internal states are not combined (that is, the query works less efficiently than it could). In this case, use the
quantiles
function.
:::note
Using
quantileTDigestWeighted
is not recommended for tiny data sets
and can lead to significant error. In this case, consider using
quantileTDigest
instead.
:::
Syntax
sql
quantileTDigestWeighted(level)(expr, weight)
Alias:
medianTDigestWeighted
.
Arguments
level
β Level of quantile. Optional parameter. Constant floating-point number from 0 to 1. We recommend using a
level
value in the range of
[0.01, 0.99]
. Default value: 0.5. At
level=0.5
the function calculates
median
.
expr
β Expression over the column values resulting in numeric
data types
,
Date
or
DateTime
.
weight
β Column with weights of sequence elements. Weight is a number of value occurrences.
Returned value
Approximate quantile of the specified level.
Type:
Float64
for numeric data type input.
Date
if input values have the
Date
type.
DateTime
if input values have the
DateTime
type.
Example
Query:
sql
SELECT quantileTDigestWeighted(number, 1) FROM numbers(10)
Result:
text
ββquantileTDigestWeighted(number, 1)ββ
β 4.5 β
ββββββββββββββββββββββββββββββββββββββ
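The weighting semantics can be sketched with a naive exact weighted quantile using midpoint interpolation (not the t-digest itself, which is an approximate streaming structure, but it agrees with the example above on this small exact input):

```python
def weighted_quantile(values, weights, level):
    """Exact weighted quantile: each value occupies a weight-proportional
    slice of [0, total]; the quantile interpolates between slice midpoints."""
    pairs = sorted(zip(values, weights))
    total = float(sum(w for _, w in pairs))
    cum = 0.0
    pts = []
    for v, w in pairs:
        pts.append((cum + w / 2.0, v))  # midpoint position of each value
        cum += w
    target = level * total
    if target <= pts[0][0]:
        return pts[0][1]
    if target >= pts[-1][0]:
        return pts[-1][1]
    for (p0, v0), (p1, v1) in zip(pts, pts[1:]):
        if p0 <= target <= p1:
            return v0 + (v1 - v0) * (target - p0) / (p1 - p0)

q = weighted_quantile(list(range(10)), [1] * 10, 0.5)  # matches the 4.5 above
```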
See Also
median
quantiles | {"source_file": "quantiletdigestweighted.md"} | [
-0.06186294183135033,
0.021809106692671776,
-0.02715003304183483,
0.02117694541811943,
-0.06182629242539406,
-0.09042855352163315,
0.029557794332504272,
0.07503392547369003,
0.06874222308397293,
-0.017988601699471474,
-0.027523130178451538,
-0.012386256828904152,
0.00804829876869917,
-0.04... |
ab735232-9c9f-40db-9bfd-3b3cf6730297 | description: 'Calculates the list of distinct data types stored in Dynamic column.'
sidebar_position: 215
slug: /sql-reference/aggregate-functions/reference/distinctdynamictypes
title: 'distinctDynamicTypes'
doc_type: 'reference'
distinctDynamicTypes
Calculates the list of distinct data types stored in
Dynamic
column.
Syntax
sql
distinctDynamicTypes(dynamic)
Arguments
dynamic
β
Dynamic
column.
Returned Value
The sorted list of data type names
Array(String)
.
Example
Query:
sql
DROP TABLE IF EXISTS test_dynamic;
CREATE TABLE test_dynamic(d Dynamic) ENGINE = Memory;
INSERT INTO test_dynamic VALUES (42), (NULL), ('Hello'), ([1, 2, 3]), ('2020-01-01'), (map(1, 2)), (43), ([4, 5]), (NULL), ('World'), (map(3, 4))
sql
SELECT distinctDynamicTypes(d) FROM test_dynamic;
Result:
reference
ββdistinctDynamicTypes(d)βββββββββββββββββββββββββββββββββββββββ
β ['Array(Int64)','Date','Int64','Map(UInt8, UInt8)','String'] β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | {"source_file": "distinctdynamictypes.md"} | [
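The semantics amount to collecting the set of observed type names and sorting it (an illustrative sketch; None stands in for NULL, which carries no type, matching the result above):

```python
def distinct_dynamic_types(type_names):
    """Sorted list of the distinct type names observed in a Dynamic-like
    column; None entries (NULLs) carry no type and are ignored."""
    return sorted({t for t in type_names if t is not None})

types = distinct_dynamic_types(
    ["Int64", None, "String", "Array(Int64)", "Date",
     "Map(UInt8, UInt8)", "Int64", "Array(Int64)", None, "String",
     "Map(UInt8, UInt8)"]
)
```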
0.018143752589821815,
0.008946170099079609,
0.040637582540512085,
0.07512930780649185,
-0.06756148487329483,
-0.018075257539749146,
0.06510226428508759,
0.04690433666110039,
0.0007808972150087357,
0.004132863599807024,
0.02637382596731186,
-0.0024518403224647045,
0.015081058256328106,
-0.0... |
061333b0-f2bb-4b72-85f6-a01f330e9296 | description: 'Totals one or more
value
arrays according to the keys specified in the
key
array. Returns a tuple of arrays: keys in sorted order, followed by values summed for
the corresponding keys without overflow.'
sidebar_position: 198
slug: /sql-reference/aggregate-functions/reference/summap
title: 'sumMap'
doc_type: 'reference'
sumMap
Totals one or more
value
arrays according to the keys specified in the
key
array. Returns a tuple of arrays: keys in sorted order, followed by values summed for the corresponding keys without overflow.
Syntax
sumMap(key <Array>, value1 <Array>[, value2 <Array>, ...])
Array type
.
sumMap(Tuple(key <Array>[, value1 <Array>, value2 <Array>, ...]))
Tuple type
.
Alias:
sumMappedArrays
.
Arguments
key
:
Array
of keys.
value1
,
value2
, ...:
Array
of values to sum for each key.
Passing a tuple of key and value arrays is a synonym to passing separately an array of keys and arrays of values.
:::note
The number of elements in
key
and all
value
arrays must be the same for each row that is totaled.
:::
Returned Value
Returns a tuple of arrays: the first array contains keys in sorted order, followed by arrays containing values summed for the corresponding keys.
Example
First we create a table called
sum_map
, and insert some data into it. Arrays of keys and values are stored separately as a column called
statusMap
of
Nested
type, and together as a column called
statusMapTuple
of
tuple
type to illustrate the use of the two different syntaxes of this function described above.
Query:
sql
CREATE TABLE sum_map(
date Date,
timeslot DateTime,
statusMap Nested(
status UInt16,
requests UInt64
),
statusMapTuple Tuple(Array(Int32), Array(Int32))
) ENGINE = Log;
sql
INSERT INTO sum_map VALUES
('2000-01-01', '2000-01-01 00:00:00', [1, 2, 3], [10, 10, 10], ([1, 2, 3], [10, 10, 10])),
('2000-01-01', '2000-01-01 00:00:00', [3, 4, 5], [10, 10, 10], ([3, 4, 5], [10, 10, 10])),
('2000-01-01', '2000-01-01 00:01:00', [4, 5, 6], [10, 10, 10], ([4, 5, 6], [10, 10, 10])),
('2000-01-01', '2000-01-01 00:01:00', [6, 7, 8], [10, 10, 10], ([6, 7, 8], [10, 10, 10]));
Next, we query the table using the
sumMap
function, making use of both array and tuple type syntaxes:
Query:
sql
SELECT
timeslot,
sumMap(statusMap.status, statusMap.requests),
sumMap(statusMapTuple)
FROM sum_map
GROUP BY timeslot
Result:
text
βββββββββββββtimeslotββ¬βsumMap(statusMap.status, statusMap.requests)ββ¬βsumMap(statusMapTuple)ββββββββββ
β 2000-01-01 00:00:00 β ([1,2,3,4,5],[10,10,20,10,10]) β ([1,2,3,4,5],[10,10,20,10,10]) β
β 2000-01-01 00:01:00 β ([4,5,6,7,8],[10,10,20,10,10]) β ([4,5,6,7,8],[10,10,20,10,10]) β
βββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββ
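The aggregation can be sketched as a per-key accumulation (illustrative Python, not ClickHouse code); it reproduces the first row of the result above:

```python
from collections import defaultdict

def sum_map(rows):
    """rows: iterable of (keys, values) pairs of equal-length arrays;
    returns (sorted keys, per-key sums)."""
    totals = defaultdict(int)
    for keys, values in rows:
        assert len(keys) == len(values), "key/value arrays must match per row"
        for k, v in zip(keys, values):
            totals[k] += v
    sorted_keys = sorted(totals)
    return sorted_keys, [totals[k] for k in sorted_keys]

agg = sum_map([([1, 2, 3], [10, 10, 10]), ([3, 4, 5], [10, 10, 10])])
```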
Example with Multiple Value Arrays | {"source_file": "summap.md"} | [
0.03719814121723175,
0.03636996075510979,
-0.006188944913446903,
-0.02024925872683525,
-0.037327077239751816,
-0.008316866122186184,
0.09930974990129471,
0.02868168242275715,
-0.057210762053728104,
0.01244570966809988,
-0.0490320548415184,
-0.012723305262625217,
0.06834381818771362,
-0.055... |
cebbc1ab-0ed9-4628-b79e-17871bbe691b | Example with Multiple Value Arrays
sumMap
also supports aggregating multiple value arrays simultaneously.
This is useful when you have related metrics that share the same keys.
```sql title="Query"
CREATE TABLE multi_metrics(
date Date,
browser_metrics Nested(
browser String,
impressions UInt32,
clicks UInt32
)
)
ENGINE = MergeTree()
ORDER BY tuple();
INSERT INTO multi_metrics VALUES
('2000-01-01', ['Firefox', 'Chrome'], [100, 200], [10, 25]),
('2000-01-01', ['Chrome', 'Safari'], [150, 50], [20, 5]),
('2000-01-01', ['Firefox', 'Edge'], [80, 40], [8, 4]);
SELECT
sumMap(browser_metrics.browser, browser_metrics.impressions, browser_metrics.clicks) AS result
FROM multi_metrics;
```
text title="Response"
ββresultβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β (['Chrome', 'Edge', 'Firefox', 'Safari'], [350, 40, 180, 50], [45, 4, 18, 5]) β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
In this example:
- The result tuple contains three arrays
- First array: keys (browser names) in sorted order
- Second array: total impressions for each browser
- Third array: total clicks for each browser
See Also
Map combinator for Map datatype
sumMapWithOverflow | {"source_file": "summap.md"} | [
0.04589028283953667,
-0.033777184784412384,
0.02515992894768715,
-0.006076439283788204,
-0.07580462843179703,
0.00553175201639533,
0.08402162045240402,
0.01038214098662138,
-0.03251590579748154,
-0.060684800148010254,
0.00877799279987812,
-0.09360779821872711,
0.037971317768096924,
-0.0449... |
e7827485-fa64-4da1-a313-10f120819ac1 | description: 'Computes the sample kurtosis of a sequence.'
sidebar_position: 158
slug: /sql-reference/aggregate-functions/reference/kurtsamp
title: 'kurtSamp'
doc_type: 'reference'
kurtSamp
Computes the
sample kurtosis
of a sequence.
It represents an unbiased estimate of the kurtosis of a random variable if passed values form its sample.
sql
kurtSamp(expr)
Arguments
expr
β
Expression
returning a number.
Returned value
The kurtosis of the given distribution. Type β
Float64
. If
n <= 1
(
n
is a size of the sample), then the function returns
nan
.
Example
sql
SELECT kurtSamp(value) FROM series_with_value_column; | {"source_file": "kurtsamp.md"} | [
-0.06263129413127899,
-0.05105283111333847,
-0.002491510007530451,
-0.007371077314019203,
-0.0048162383027374744,
-0.08078417181968689,
0.06984137743711472,
0.053451742976903915,
0.03257302939891815,
0.07906223833560944,
0.07928722351789474,
-0.041856370866298676,
-0.007839646190404892,
-0... |
caedfbb8-0a1d-4aab-8b8a-71a53466d036 | description: 'This function implements stochastic logistic regression. It can be used
for binary classification problems, supports the same custom parameters as stochasticLinearRegression,
and works the same way.'
sidebar_position: 193
slug: /sql-reference/aggregate-functions/reference/stochasticlogisticregression
title: 'stochasticLogisticRegression'
doc_type: 'reference'
stochasticLogisticRegression
This function implements stochastic logistic regression. It can be used for binary classification problems, supports the same custom parameters as stochasticLinearRegression, and works the same way.
Parameters
Parameters are exactly the same as in stochasticLinearRegression:
learning rate
,
l2 regularization coefficient
,
mini-batch size
,
method for updating weights
.
For more information see
parameters
.
text
stochasticLogisticRegression(1.0, 1.0, 10, 'SGD')
1. Fitting
See the `Fitting` section in the [stochasticLinearRegression](/sql-reference/aggregate-functions/reference/stochasticlinearregression) description.
Predicted labels have to be in \[-1, 1\].
2. Predicting
Using saved state we can predict probability of object having label `1`.
```sql
WITH (SELECT state FROM your_model) AS model SELECT
evalMLMethod(model, param1, param2) FROM test_data
```
The query will return a column of probabilities. Note that first argument of `evalMLMethod` is `AggregateFunctionState` object, next are columns of features.
We can also set a bound of probability, which assigns elements to different labels.
```sql
SELECT ans < 1.1 AND ans > 0.5 FROM
(WITH (SELECT state FROM your_model) AS model SELECT
evalMLMethod(model, param1, param2) AS ans FROM test_data)
```
Then the result will be labels.
`test_data` is a table like `train_data` but may not contain the target value.
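The idea can be sketched in a few lines of Python (plain SGD on the logistic loss with labels in {-1, 1}; the hyperparameters and data here are illustrative, not ClickHouse's implementation):

```python
import math

def train_logreg(data, lr=0.1, epochs=200):
    """data: list of (features, label) with label in {-1, 1}.
    Returns weights; the last entry is the bias."""
    w = [0.0] * (len(data[0][0]) + 1)
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + w[-1]
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(label = 1)
            g = p - (y + 1) / 2              # gradient of the logistic loss
            for i, xi in enumerate(x):
                w[i] -= lr * g * xi
            w[-1] -= lr * g
    return w

def predict(w, x):
    """Probability that the object has label 1 (what evalMLMethod returns)."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + w[-1]
    return 1.0 / (1.0 + math.exp(-z))

model = train_logreg([([0.0], -1), ([1.0], -1), ([3.0], 1), ([4.0], 1)])
```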
See Also
stochasticLinearRegression
Difference between linear and logistic regressions. | {"source_file": "stochasticlogisticregression.md"} | [
-0.04807903245091438,
-0.05999905616044998,
-0.07462777197360992,
0.04075859487056732,
-0.035920970141887665,
0.002290223026648164,
0.030580462887883186,
0.03227803856134415,
-0.08260888606309891,
-0.07290278375148773,
0.004416422452777624,
-0.01197010837495327,
-0.004685898311436176,
-0.1... |
f5062210-e687-4027-9002-52d4090e1b22 | description: 'Calculates the exponential moving average of values for the determined
time.'
sidebar_position: 132
slug: /sql-reference/aggregate-functions/reference/exponentialMovingAverage
title: 'exponentialMovingAverage'
doc_type: 'reference'
exponentialMovingAverage
Calculates the exponential moving average of values for the determined time.
Syntax
sql
exponentialMovingAverage(x)(value, timeunit)
Each
value
corresponds to a particular
timeunit
. The half-life
x
is the time lag at which the exponential weights decay by one-half. The function returns a weighted average: the older the time point, the less weight the corresponding value carries.
Arguments
value
β Value.
Integer
,
Float
or
Decimal
.
timeunit
β Timeunit.
Integer
,
Float
or
Decimal
. Timeunit is not a timestamp (seconds) but an index of the time interval. It can be calculated using
intDiv
.
Parameters
x
β Half-life period.
Integer
,
Float
or
Decimal
.
Returned values
Returns an
exponentially smoothed moving average
of the values for the past
x
time at the latest point of time.
Type:
Float64
.
Examples
Input table:
text
βββtemperatureββ¬βtimestampβββ
β 95 β 1 β
β 95 β 2 β
β 95 β 3 β
β 96 β 4 β
β 96 β 5 β
β 96 β 6 β
β 96 β 7 β
β 97 β 8 β
β 97 β 9 β
β 97 β 10 β
β 97 β 11 β
β 98 β 12 β
β 98 β 13 β
β 98 β 14 β
β 98 β 15 β
β 99 β 16 β
β 99 β 17 β
β 99 β 18 β
β 100 β 19 β
β 100 β 20 β
ββββββββββββββββ΄βββββββββββββ
Query:
sql
SELECT exponentialMovingAverage(5)(temperature, timestamp);
Result:
text
βββexponentialMovingAverage(5)(temperature, timestamp)βββ
β 92.25779635374204 β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Query:
sql
SELECT
value,
time,
round(exp_smooth, 3),
bar(exp_smooth, 0, 1, 50) AS bar
FROM
(
SELECT
(number = 0) OR (number >= 25) AS value,
number AS time,
exponentialMovingAverage(10)(value, time) OVER (Rows BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS exp_smooth
FROM numbers(50)
)
Result: | {"source_file": "exponentialmovingaverage.md"} | [
-0.10004518926143646,
0.010479414835572243,
-0.064855195581913,
0.041349366307258606,
-0.012248622253537178,
-0.11826612055301666,
0.03704467788338661,
0.09853663295507431,
-0.001972064608708024,
-0.005717617925256491,
0.05608930066227913,
-0.07583284378051758,
0.0434933565557003,
-0.00212... |
8c8d3c5b-cd56-4d8c-99ba-b3e62fb6a698 | text
ββvalueββ¬βtimeββ¬βround(exp_smooth, 3)ββ¬βbarβββββββββββββββββββββββββββββββββββββββββ
β 1 β 0 β 0.067 β ββββ β
β 0 β 1 β 0.062 β βββ β
β 0 β 2 β 0.058 β βββ β
β 0 β 3 β 0.054 β βββ β
β 0 β 4 β 0.051 β βββ β
β 0 β 5 β 0.047 β βββ β
β 0 β 6 β 0.044 β βββ β
β 0 β 7 β 0.041 β ββ β
β 0 β 8 β 0.038 β ββ β
β 0 β 9 β 0.036 β ββ β
β 0 β 10 β 0.033 β ββ β
β 0 β 11 β 0.031 β ββ β
β 0 β 12 β 0.029 β ββ β
β 0 β 13 β 0.027 β ββ β
β 0 β 14 β 0.025 β ββ β
β 0 β 15 β 0.024 β ββ β
β 0 β 16 β 0.022 β β β
β 0 β 17 β 0.021 β β β
β 0 β 18 β 0.019 β β β
β 0 β 19 β 0.018 β β β
β 0 β 20 β 0.017 β β β
β 0 β 21 β 0.016 β β β
β 0 β 22 β 0.015 β β β
β 0 β 23 β 0.014 β β β
β 0 β 24 β 0.013 β β β
β 1 β 25 β 0.079 β ββββ β
β 1 β 26 β 0.14 β βββββββ β
β 1 β 27 β 0.198 β ββββββββββ β
β 1 β 28 β 0.252 β βββββββββββββ β
β 1 β 29 β 0.302 β βββββββββββββββ β
β 1 β 30 β 0.349 β ββββββββββββββββββ β
β 1 β 31 β 0.392 β ββββββββββββββββββββ β
β 1 β 32 β 0.433 β ββββββββββββββββββββββ β
β 1 β 33 β 0.471 β ββββββββββββββββββββββββ β | {"source_file": "exponentialmovingaverage.md"} | [
-0.027695782482624054,
-0.002298541134223342,
-0.03854485973715782,
0.019989751279354095,
-0.03384868800640106,
-0.10878176242113113,
0.10489727556705475,
-0.03768517076969147,
-0.03664698079228401,
0.04698941856622696,
0.09636275470256805,
-0.033348631113767624,
0.018311018124222755,
-0.0... |
c1a59c83-e2c5-49d4-b4bb-3e7deb8cef0d | β 1 β 32 β 0.433 β ββββββββββββββββββββββ β
β 1 β 33 β 0.471 β ββββββββββββββββββββββββ β
β 1 β 34 β 0.506 β ββββββββββββββββββββββββββ β
β 1 β 35 β 0.539 β βββββββββββββββββββββββββββ β
β 1 β 36 β 0.57 β βββββββββββββββββββββββββββββ β
β 1 β 37 β 0.599 β ββββββββββββββββββββββββββββββ β
β 1 β 38 β 0.626 β ββββββββββββββββββββββββββββββββ β
β 1 β 39 β 0.651 β βββββββββββββββββββββββββββββββββ β
β 1 β 40 β 0.674 β ββββββββββββββββββββββββββββββββββ β
β 1 β 41 β 0.696 β βββββββββββββββββββββββββββββββββββ β
β 1 β 42 β 0.716 β ββββββββββββββββββββββββββββββββββββ β
β 1 β 43 β 0.735 β βββββββββββββββββββββββββββββββββββββ β
β 1 β 44 β 0.753 β ββββββββββββββββββββββββββββββββββββββ β
β 1 β 45 β 0.77 β βββββββββββββββββββββββββββββββββββββββ β
β 1 β 46 β 0.785 β ββββββββββββββββββββββββββββββββββββββββ β
β 1 β 47 β 0.8 β ββββββββββββββββββββββββββββββββββββββββ β
β 1 β 48 β 0.813 β βββββββββββββββββββββββββββββββββββββββββ β
β 1 β 49 β 0.825 β ββββββββββββββββββββββββββββββββββββββββββ β
βββββββββ΄βββββββ΄βββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββββββββ | {"source_file": "exponentialmovingaverage.md"} | [
0.008792988024652004,
0.014759434387087822,
-0.05452706664800644,
-0.01829824410378933,
-0.04318508133292198,
-0.07447654008865356,
0.06624894589185715,
-0.051525626331567764,
-0.05011564865708351,
0.1044921800494194,
0.014298651367425919,
-0.04397069290280342,
0.07646362483501434,
-0.0125... |
65431da0-7318-42d4-9728-3ffd5000599a | ```sql
CREATE TABLE data
ENGINE = Memory AS
SELECT
10 AS value,
toDateTime('2020-01-01') + (3600 * number) AS time
FROM numbers_mt(10);
-- Calculate timeunit using intDiv
SELECT
value,
time,
exponentialMovingAverage(1)(value, intDiv(toUInt32(time), 3600)) OVER (ORDER BY time ASC) AS res,
intDiv(toUInt32(time), 3600) AS timeunit
FROM data
ORDER BY time ASC;
ββvalueββ¬ββββββββββββββββtimeββ¬βββββββββresββ¬βtimeunitββ
β 10 β 2020-01-01 00:00:00 β 5 β 438288 β
β 10 β 2020-01-01 01:00:00 β 7.5 β 438289 β
β 10 β 2020-01-01 02:00:00 β 8.75 β 438290 β
β 10 β 2020-01-01 03:00:00 β 9.375 β 438291 β
β 10 β 2020-01-01 04:00:00 β 9.6875 β 438292 β
β 10 β 2020-01-01 05:00:00 β 9.84375 β 438293 β
β 10 β 2020-01-01 06:00:00 β 9.921875 β 438294 β
β 10 β 2020-01-01 07:00:00 β 9.9609375 β 438295 β
β 10 β 2020-01-01 08:00:00 β 9.98046875 β 438296 β
β 10 β 2020-01-01 09:00:00 β 9.990234375 β 438297 β
βββββββββ΄ββββββββββββββββββββββ΄ββββββββββββββ΄βββββββββββ
-- Calculate timeunit using toRelativeHourNum
SELECT
value,
time,
exponentialMovingAverage(1)(value, toRelativeHourNum(time)) OVER (ORDER BY time ASC) AS res,
toRelativeHourNum(time) AS timeunit
FROM data
ORDER BY time ASC;
ββvalueββ¬ββββββββββββββββtimeββ¬βββββββββresββ¬βtimeunitββ
β 10 β 2020-01-01 00:00:00 β 5 β 438288 β
β 10 β 2020-01-01 01:00:00 β 7.5 β 438289 β
β 10 β 2020-01-01 02:00:00 β 8.75 β 438290 β
β 10 β 2020-01-01 03:00:00 β 9.375 β 438291 β
β 10 β 2020-01-01 04:00:00 β 9.6875 β 438292 β
β 10 β 2020-01-01 05:00:00 β 9.84375 β 438293 β
β 10 β 2020-01-01 06:00:00 β 9.921875 β 438294 β
β 10 β 2020-01-01 07:00:00 β 9.9609375 β 438295 β
β 10 β 2020-01-01 08:00:00 β 9.98046875 β 438296 β
β 10 β 2020-01-01 09:00:00 β 9.990234375 β 438297 β
βββββββββ΄ββββββββββββββββββββββ΄ββββββββββββββ΄βββββββββββ
``` | {"source_file": "exponentialmovingaverage.md"} | [
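The numbers in the tables above follow a simple recurrence: the state starts at 0, and each sample shrinks the old state by 2^(−Δt/x) (where Δt is in timeunits and x is the half-life) and blends in the new value with the remaining weight. A sketch of the observable behaviour (inferred from the outputs above, including the first sample being decayed once; not the internal implementation):

```python
def exponential_moving_average(samples, half_life):
    """samples: list of (value, timeunit) in ascending time order.
    Reproduces the sequences shown above: ema starts at 0 and each sample
    contributes weight (1 - 2**(-dt / half_life))."""
    ema, prev_t, out = 0.0, None, []
    for value, t in samples:
        dt = 1 if prev_t is None else t - prev_t  # first sample: assume one step
        decay = 2.0 ** (-dt / half_life)
        ema = ema * decay + value * (1.0 - decay)
        prev_t = t
        out.append(ema)
    return out

seq = exponential_moving_average([(10, t) for t in range(10)], half_life=1)
```

With half-life 1 and a constant value of 10 this yields 5, 7.5, 8.75, 9.375, ... — the same sequence as in the tables above.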
-0.020547248423099518,
-0.008887499570846558,
-0.029432864859700203,
0.03628590330481529,
-0.05414724722504616,
-0.04317663982510567,
-0.013765309937298298,
0.023477137088775635,
0.004913045559078455,
0.019114913418889046,
0.051063962280750275,
-0.059008028358221054,
-0.011610418558120728,
... |
b5c211d8-3c96-413b-a356-60a58911d9e4 | description: 'Aggregate function for re-sampling time series data for PromQL-like irate and idelta calculation'
sidebar_position: 224
slug: /sql-reference/aggregate-functions/reference/timeSeriesLastTwoSamples
title: 'timeSeriesLastTwoSamples'
doc_type: 'reference'
Aggregate function that takes time series data as pairs of timestamps and values and stores at most the 2 most recent samples.
Arguments:
-
timestamp
- timestamp of the sample
-
value
- value of the time series corresponding to the
timestamp
Also it is possible to pass multiple samples of timestamps and values as Arrays of equal size.
Return value:
A
Tuple(Array(DateTime), Array(Float64))
- a pair of equal-length arrays (length 0 to 2). The first array contains the timestamps of the sampled time series, the second array contains the corresponding values of the time series.
Example:
This aggregate function is intended to be used with a Materialized View and Aggregated table that stores re-sampled time series data for grid-aligned timestamps.
Consider the following example table for raw data, and a table for storing re-sampled data:
```sql
-- Table for raw data
CREATE TABLE t_raw_timeseries
(
metric_id UInt64,
timestamp DateTime64(3, 'UTC') CODEC(DoubleDelta, ZSTD),
value Float64 CODEC(DoubleDelta)
)
ENGINE = MergeTree()
ORDER BY (metric_id, timestamp);
-- Table with data re-sampled to bigger (15 sec) time steps
CREATE TABLE t_resampled_timeseries_15_sec
(
metric_id UInt64,
grid_timestamp DateTime('UTC') CODEC(DoubleDelta, ZSTD), -- Timestamp aligned to 15 sec
samples AggregateFunction(timeSeriesLastTwoSamples, DateTime64(3, 'UTC'), Float64)
)
ENGINE = AggregatingMergeTree()
ORDER BY (metric_id, grid_timestamp);
-- MV for populating re-sampled table
CREATE MATERIALIZED VIEW mv_resampled_timeseries TO t_resampled_timeseries_15_sec
(
metric_id UInt64,
grid_timestamp DateTime('UTC') CODEC(DoubleDelta, ZSTD),
samples AggregateFunction(timeSeriesLastTwoSamples, DateTime64(3, 'UTC'), Float64)
)
AS SELECT
metric_id,
ceil(toUnixTimestamp(timestamp + interval 999 millisecond) / 15, 0) * 15 AS grid_timestamp, -- Round timestamp up to the next grid point
initializeAggregation('timeSeriesLastTwoSamplesState', timestamp, value) AS samples
FROM t_raw_timeseries
ORDER BY metric_id, grid_timestamp;
```
Insert some test data and read the data between '2024-12-12 12:00:12' and '2024-12-12 12:00:30'
```sql
-- Insert some data
INSERT INTO t_raw_timeseries(metric_id, timestamp, value) SELECT number%10 AS metric_id, '2024-12-12 12:00:00'::DateTime64(3, 'UTC') + interval ((number/10)%100)*900 millisecond as timestamp, number%3+number%29 AS value FROM numbers(1000);
-- Check raw data
SELECT *
FROM t_raw_timeseries
WHERE metric_id = 3 AND timestamp BETWEEN '2024-12-12 12:00:12' AND '2024-12-12 12:00:31'
ORDER BY metric_id, timestamp;
``` | {"source_file": "timeSeriesLastTwoSamples.md"} | [
```response
3 2024-12-12 12:00:12.870 29
3 2024-12-12 12:00:13.770 8
3 2024-12-12 12:00:14.670 19
3 2024-12-12 12:00:15.570 30
3 2024-12-12 12:00:16.470 9
3 2024-12-12 12:00:17.370 20
3 2024-12-12 12:00:18.270 2
3 2024-12-12 12:00:19.170 10
3 2024-12-12 12:00:20.070 21
3 2024-12-12 12:00:20.970 3
3 2024-12-12 12:00:21.870 11
3 2024-12-12 12:00:22.770 22
3 2024-12-12 12:00:23.670 4
3 2024-12-12 12:00:24.570 12
3 2024-12-12 12:00:25.470 23
3 2024-12-12 12:00:26.370 5
3 2024-12-12 12:00:27.270 13
3 2024-12-12 12:00:28.170 24
3 2024-12-12 12:00:29.069 6
3 2024-12-12 12:00:29.969 14
3 2024-12-12 12:00:30.869 25
```
Query the last two samples for the grid timestamps '2024-12-12 12:00:15' and '2024-12-12 12:00:30':
```sql
-- Check re-sampled data
SELECT metric_id, grid_timestamp, (finalizeAggregation(samples).1 AS timestamp, finalizeAggregation(samples).2 AS value)
FROM t_resampled_timeseries_15_sec
WHERE metric_id = 3 AND grid_timestamp BETWEEN '2024-12-12 12:00:15' AND '2024-12-12 12:00:30'
ORDER BY metric_id, grid_timestamp;
```
```response
3 2024-12-12 12:00:15 (['2024-12-12 12:00:14.670','2024-12-12 12:00:13.770'],[19,8])
3 2024-12-12 12:00:30 (['2024-12-12 12:00:29.969','2024-12-12 12:00:29.069'],[14,6])
```
The aggregated table stores only the last two values for each 15-second-aligned timestamp. This makes it possible to calculate PromQL-like `irate` and `idelta`
```sql
-- Calculate idelta and irate from the raw data
WITH
    '2024-12-12 12:00:15'::DateTime64(3,'UTC') AS start_ts,  -- start of timestamp grid
    start_ts + INTERVAL 60 SECOND AS end_ts,                 -- end of timestamp grid
    15 AS step_seconds,                                      -- step of timestamp grid
    45 AS window_seconds                                     -- "staleness" window
SELECT
    metric_id,
    timeSeriesInstantDeltaToGrid(start_ts, end_ts, step_seconds, window_seconds)(timestamp, value),
    timeSeriesInstantRateToGrid(start_ts, end_ts, step_seconds, window_seconds)(timestamp, value)
FROM t_raw_timeseries
WHERE metric_id = 3 AND timestamp BETWEEN start_ts - interval window_seconds seconds AND end_ts
GROUP BY metric_id;
```
```response
3 [11,8,-18,8,11] [12.222222222222221,8.88888888888889,1.1111111111111112,8.88888888888889,12.222222222222221]
```
```sql
-- Calculate idelta and irate from the re-sampled data
WITH
    '2024-12-12 12:00:15'::DateTime64(3,'UTC') AS start_ts,  -- start of timestamp grid
    start_ts + INTERVAL 60 SECOND AS end_ts,                 -- end of timestamp grid
    15 AS step_seconds,                                      -- step of timestamp grid
    45 AS window_seconds                                     -- "staleness" window
SELECT
    metric_id,
    timeSeriesInstantDeltaToGrid(start_ts, end_ts, step_seconds, window_seconds)(timestamps, values),
    timeSeriesInstantRateToGrid(start_ts, end_ts, step_seconds, window_seconds)(timestamps, values)
FROM (
    SELECT
        metric_id,
        finalizeAggregation(samples).1 AS timestamps,
        finalizeAggregation(samples).2 AS values
    FROM t_resampled_timeseries_15_sec
    WHERE metric_id = 3 AND grid_timestamp BETWEEN start_ts - interval window_seconds seconds AND end_ts
)
GROUP BY metric_id;
```
```response
3 [11,8,-18,8,11] [12.222222222222221,8.88888888888889,1.1111111111111112,8.88888888888889,12.222222222222221]
```
:::note
This function is experimental; enable it by setting `allow_experimental_ts_to_grid_aggregate_function = true`.
:::
description: 'Calculates the approximate number of different argument values. It is
the same as uniqCombined, but uses a 64-bit hash for all data types rather than
just for the String data type.'
sidebar_position: 206
slug: /sql-reference/aggregate-functions/reference/uniqcombined64
title: 'uniqCombined64'
doc_type: 'reference'
uniqCombined64
Calculates the approximate number of different argument values. It is the same as `uniqCombined`, but uses a 64-bit hash for all data types rather than just for the String data type.
```sql
uniqCombined64(HLL_precision)(x[, ...])
```
Parameters
- `HLL_precision`: The base-2 logarithm of the number of cells in HyperLogLog. Optionally, you can use the function as `uniqCombined64(x[, ...])`. The default value for `HLL_precision` is 17, which is effectively 96 KiB of space (2^17 cells, 6 bits each).
- `x`: A variable number of parameters. Parameters can be `Tuple`, `Array`, `Date`, `DateTime`, `String`, or numeric types.
Returned value
A `UInt64`-type number.
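As a sketch of the precision trade-off (an illustrative, table-free query; exact estimates will vary from run to run):

```sql
-- Default precision (17): roughly 96 KiB of state, smallest error
SELECT uniqCombined64(number % 100000) FROM numbers(1000000);

-- Explicit lower precision (12): far less memory, larger relative error
SELECT uniqCombined64(12)(number % 100000) FROM numbers(1000000);
```

Both queries estimate the same true cardinality (100000); the second trades accuracy for a smaller HyperLogLog state.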
Implementation details
The `uniqCombined64` function:
- Calculates a hash (64-bit hash for all data types) for all parameters in the aggregate, then uses it in calculations.
- Uses a combination of three algorithms: array, hash table, and HyperLogLog with an error correction table.
- For a small number of distinct elements, an array is used.
- When the set size is larger, a hash table is used.
- For a larger number of elements, HyperLogLog is used, which will occupy a fixed amount of memory.
- Provides the result deterministically (it does not depend on the query processing order).
:::note
Since it uses a 64-bit hash for all types, the result does not suffer from very high error for cardinalities significantly larger than `UINT_MAX`, as `uniqCombined` (which uses a 32-bit hash for non-`String` types) does.
:::
Compared to the `uniq` function, the `uniqCombined64` function:
- Consumes several times less memory.
- Calculates with several times higher accuracy.
Example
In the example below `uniqCombined64` is run on `1e10` different numbers, returning a very close approximation of the number of different argument values.
Query:
```sql
SELECT uniqCombined64(number) FROM numbers(1e10);
```
Result:
```response
ββuniqCombined64(number)ββ
β              9998568925 β -- 10.00 billion
ββββββββββββββββββββββββββ
```
By comparison, the `uniqCombined` function returns a rather poor approximation for an input this size.
Query:
```sql
SELECT uniqCombined(number) FROM numbers(1e10);
```
Result:
```response
ββuniqCombined(number)ββ
β            5545308725 β -- 5.55 billion
ββββββββββββββββββββββββ
```
See Also
- uniq
- uniqCombined
- uniqHLL12
- uniqExact
- uniqTheta
description: 'Applies mean z-test to samples from two populations.'
sidebar_label: 'meanZTest'
sidebar_position: 166
slug: /sql-reference/aggregate-functions/reference/meanztest
title: 'meanZTest'
doc_type: 'reference'
meanZTest
Applies mean z-test to samples from two populations.
Syntax
```sql
meanZTest(population_variance_x, population_variance_y, confidence_level)(sample_data, sample_index)
```
Values of both samples are in the `sample_data` column. If `sample_index` equals 0, the value in that row belongs to the sample from the first population; otherwise it belongs to the sample from the second population.
The null hypothesis is that the means of the populations are equal. A normal distribution is assumed. The populations may have unequal variance, and the variances are known.
Arguments
- `sample_data` — Sample data. Integer, Float or Decimal.
- `sample_index` — Sample index. Integer.
Parameters
- `population_variance_x` — Variance for population x. Float.
- `population_variance_y` — Variance for population y. Float.
- `confidence_level` — Confidence level in order to calculate confidence intervals. Float.
Returned values
Tuple with four elements:
- calculated z-statistic. Float64.
- calculated p-value. Float64.
- calculated confidence-interval-low. Float64.
- calculated confidence-interval-high. Float64.
Example
Input table:
```text
ββsample_dataββ¬βsample_indexββ
β        20.3 β            0 β
β        21.9 β            0 β
β        22.1 β            0 β
β        18.9 β            1 β
β          19 β            1 β
β        20.3 β            1 β
βββββββββββββββ΄βββββββββββββββ
```
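The input table above can be reproduced as follows (a sketch; the table name `mean_ztest` matches the query below, while the column types are an assumption):

```sql
-- Hypothetical setup for the example; column types are assumed
CREATE TABLE mean_ztest
(
    sample_data Float64,
    sample_index UInt8
)
ENGINE = Memory;

INSERT INTO mean_ztest VALUES
    (20.3, 0), (21.9, 0), (22.1, 0),
    (18.9, 1), (19, 1), (20.3, 1);
```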
Query:
```sql
SELECT meanZTest(0.7, 0.45, 0.95)(sample_data, sample_index) FROM mean_ztest
```
Result:
```text
ββmeanZTest(0.7, 0.45, 0.95)(sample_data, sample_index)βββββββββββββββββββββββββββββ
β (3.2841296025548123,0.0010229786769086013,0.8198428246768334,3.2468238419898365) β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
```
description: 'Applies the one-sample Student t-test to a sample and a known population mean.'
sidebar_label: 'studentTTestOneSample'
sidebar_position: 195
slug: /sql-reference/aggregate-functions/reference/studentttestonesample
title: 'studentTTestOneSample'
doc_type: 'reference'
studentTTestOneSample
Applies the one-sample Student's t-test to determine whether the mean of a sample differs from a known population mean.
Normality is assumed. The null hypothesis is that the sample mean equals the population mean.
Syntax
```sql
studentTTestOneSample([confidence_level])(sample_data, population_mean)
```
The optional `confidence_level` parameter enables confidence interval calculation.
Arguments
- `sample_data` — Sample data. Integer, Float or Decimal.
- `population_mean` — Known population mean to test against. Integer, Float or Decimal (usually a constant).
Parameters
- `confidence_level` — Confidence level for confidence intervals. Float in (0, 1).
Notes:
- At least 2 observations are required; otherwise the result is `(nan, nan)` (and intervals, if requested, are `nan`).
- Constant or near-constant input will also return `nan` due to zero (or effectively zero) standard error.
Returned values
Tuple with two or four elements (if `confidence_level` is specified):
- calculated t-statistic. Float64.
- calculated p-value (two-tailed). Float64.
- calculated confidence-interval-low. Float64. (optional)
- calculated confidence-interval-high. Float64. (optional)

Confidence intervals are for the sample mean at the given confidence level.
Examples
Input table:
```text
ββvalueββ
β  20.3 β
β  21.1 β
β  21.7 β
β  19.9 β
β  21.8 β
βββββββββ
```
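The example table can be reproduced with the following statements (a sketch; the table name `t` matches the queries below, while the column type is an assumption):

```sql
-- Hypothetical setup for the example; the column type is assumed
CREATE TABLE t (value Float64) ENGINE = Memory;

INSERT INTO t VALUES (20.3), (21.1), (21.7), (19.9), (21.8);
```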
Without confidence interval:
```sql
SELECT studentTTestOneSample()(value, 20.0) FROM t;
-- or simply
SELECT studentTTestOneSample(value, 20.0) FROM t;
```
With confidence interval (95%):
```sql
SELECT studentTTestOneSample(0.95)(value, 20.0) FROM t;
```
See Also
- Student's t-test
- studentTTest function
description: 'Calculates the approximate number of different argument values, using
the HyperLogLog algorithm.'
sidebar_position: 208
slug: /sql-reference/aggregate-functions/reference/uniqhll12
title: 'uniqHLL12'
doc_type: 'reference'
uniqHLL12
Calculates the approximate number of different argument values, using the HyperLogLog algorithm.

```sql
uniqHLL12(x[, ...])
```
Arguments
The function takes a variable number of parameters. Parameters can be `Tuple`, `Array`, `Date`, `DateTime`, `String`, or numeric types.
Returned value
A `UInt64`-type number.
Implementation details
Function:
Calculates a hash for all parameters in the aggregate, then uses it in calculations.
Uses the HyperLogLog algorithm to approximate the number of different argument values.
2^12 5-bit cells are used. The size of the state is slightly more than 2.5 KB. The result is not very accurate (up to ~10% error) for small data sets (<10K elements). However, the result is fairly accurate for high-cardinality data sets (10K-100M), with a maximum error of ~1.6%. Starting from 100M, the estimation error increases, and the function will return very inaccurate results for data sets with extremely high cardinality (1B+ elements).
Provides a deterministic result (it does not depend on the query processing order).
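A minimal usage sketch; at this cardinality the estimate should land within roughly the documented ~1.6% error of the true count, so no exact output is asserted:

```sql
-- Estimate the number of distinct values among 1 million integers.
-- uniqHLL12 keeps a fixed state of ~2.5 KB regardless of input size.
SELECT uniqHLL12(number) FROM numbers(1000000);
```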
We do not recommend using this function. In most cases, use the `uniq` or `uniqCombined` function.
See Also
- uniq
- uniqCombined
- uniqExact
- uniqTheta
description: 'Aggregates arrays into a larger array of those arrays.'
keywords: ['groupArrayArray', 'array_concat_agg']
sidebar_position: 111
slug: /sql-reference/aggregate-functions/reference/grouparrayarray
title: 'groupArrayArray'
doc_type: 'reference'
groupArrayArray
Aggregates arrays into a larger array of those arrays.
Combines the `groupArray` function with the `Array` combinator.
Alias: `array_concat_agg`
Example
We have data which captures user browsing sessions. Each session records the sequence of pages a specific user visited.
We can use the `groupArrayArray` function to analyze the patterns of page visits for each user.
```sql title="Setup"
CREATE TABLE website_visits (
user_id UInt32,
session_id UInt32,
page_visits Array(String)
) ENGINE = Memory;
INSERT INTO website_visits VALUES
(101, 1, ['homepage', 'products', 'checkout']),
(101, 2, ['search', 'product_details', 'contact']),
(102, 1, ['homepage', 'about_us']),
(101, 3, ['blog', 'homepage']),
(102, 2, ['products', 'product_details', 'add_to_cart', 'checkout']);
```
```sql title="Query"
SELECT
    user_id,
    groupArrayArray(page_visits) AS user_session_page_sequences
FROM website_visits
GROUP BY user_id;
```
```response title="Response"
ββuser_idββ¬βuser_session_page_sequencesββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
1. β     101 β ['homepage','products','checkout','search','product_details','contact','blog','homepage'] β
2. β     102 β ['homepage','about_us','products','product_details','add_to_cart','checkout']              β
βββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
```
description: 'Creates an array from different argument values.'
sidebar_position: 154
slug: /sql-reference/aggregate-functions/reference/groupuniqarray
title: 'groupUniqArray'
doc_type: 'reference'
groupUniqArray
Syntax: `groupUniqArray(x)` or `groupUniqArray(max_size)(x)`

Creates an array from different argument values. Memory consumption is the same as for the `uniqExact` function.
The second version (with the `max_size` parameter) limits the size of the resulting array to `max_size` elements. For example, `groupUniqArray(1)(x)` is equivalent to `[any(x)]`.
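A usage sketch; note that the order of elements in the result is not guaranteed:

```sql
-- numbers 0..9 modulo 3 yields many duplicates; only distinct values are kept,
-- so the result contains 0, 1 and 2 in some order
SELECT groupUniqArray(number % 3) FROM numbers(10);

-- Keep at most 2 distinct values
SELECT groupUniqArray(2)(number % 3) FROM numbers(10);
```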
description: 'Applies bit-wise AND for series of numbers.'
sidebar_position: 147
slug: /sql-reference/aggregate-functions/reference/groupbitand
title: 'groupBitAnd'
doc_type: 'reference'
groupBitAnd
Applies bit-wise `AND` for series of numbers.

```sql
groupBitAnd(expr)
```
Arguments
`expr` — An expression that results in `UInt*` or `Int*` type.
Return value
Value of the `UInt*` or `Int*` type.
Example
Test data:
```text
binary     decimal
00101100 = 44
00011100 = 28
00001101 = 13
01010101 = 85
```
Query:
```sql
SELECT groupBitAnd(num) FROM t
```
Where `num` is the column with the test data.
Result:
```text
binary     decimal
00000100 = 4
```
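The test data above can be reproduced as follows (a sketch; the table name `t` and column `num` match the query above, while the engine and column type are assumptions):

```sql
-- Hypothetical setup reproducing the documented test data
CREATE TABLE t (num UInt8) ENGINE = Memory;

INSERT INTO t VALUES (44), (28), (13), (85);

SELECT groupBitAnd(num) FROM t; -- 44 & 28 & 13 & 85 = 4
```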
description: 'Sums the arithmetic difference between consecutive rows.'
sidebar_position: 129
slug: /sql-reference/aggregate-functions/reference/deltasum
title: 'deltaSum'
doc_type: 'reference'
deltaSum
Sums the arithmetic difference between consecutive rows. If the difference is negative, it is ignored.
:::note
The underlying data must be sorted for this function to work properly. If you would like to use this function in a materialized view, you most likely want to use the `deltaSumTimestamp` function instead.
:::
Syntax
```sql
deltaSum(value)
```
Arguments
`value` — Input values, must be `Integer` or `Float` type.
Returned value
The accumulated arithmetic difference, of the `Integer` or `Float` type.
Examples
Query:
```sql
SELECT deltaSum(arrayJoin([1, 2, 3]));
```
Result:
```text
ββdeltaSum(arrayJoin([1, 2, 3]))ββ
β                               2 β
ββββββββββββββββββββββββββββββββββ
```
Query:
```sql
SELECT deltaSum(arrayJoin([1, 2, 3, 0, 3, 4, 2, 3]));
```
Result:
```text
ββdeltaSum(arrayJoin([1, 2, 3, 0, 3, 4, 2, 3]))ββ
β                                              7 β
βββββββββββββββββββββββββββββββββββββββββββββββββ
```
Query:
```sql
SELECT deltaSum(arrayJoin([2.25, 3, 4.5]));
```
Result:
```text
ββdeltaSum(arrayJoin([2.25, 3, 4.5]))ββ
β                                 2.25 β
βββββββββββββββββββββββββββββββββββββββ
```
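Because `deltaSum` depends on row order, when reading from a table you may want to enforce ordering in a subquery (a sketch; the table `t_metrics` and columns `ts`, `value` are assumed names for illustration):

```sql
-- Force a deterministic row order before applying deltaSum
SELECT deltaSum(value)
FROM
(
    SELECT value
    FROM t_metrics
    ORDER BY ts
);
```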
See Also {#see-also}
- runningDifference
description: 'Returns an array with the first N items in ascending order.'
sidebar_position: 146
slug: /sql-reference/aggregate-functions/reference/grouparraysorted
title: 'groupArraySorted'
doc_type: 'reference'
groupArraySorted
Returns an array with the first N items in ascending order.
```sql
groupArraySorted(N)(column)
```
Arguments
- `N` — The number of elements to return.
- `column` — The value (Integer, String, Float and other Generic types).
Example
Gets the first 10 numbers:
```sql
SELECT groupArraySorted(10)(number) FROM numbers(100)
```

```text
ββgroupArraySorted(10)(number)ββ
β [0,1,2,3,4,5,6,7,8,9]        β
ββββββββββββββββββββββββββββββββ
```
Gets the String representations of the first 5 numbers in a column:
```sql
SELECT groupArraySorted(5)(str) FROM (SELECT toString(number) AS str FROM numbers(5));
```

```text
ββgroupArraySorted(5)(str)ββ
β ['0','1','2','3','4']    β
ββββββββββββββββββββββββββββ
```