id stringlengths 36 36 | document stringlengths 3 3k | metadata stringlengths 23 69 | embeddings listlengths 384 384 |
|---|---|---|---|
9bd98fea-1c55-4919-a2b5-e21a784bf8ac | description: 'An extension to the hudi table function. Allows processing files from
Apache Hudi tables in Amazon S3 in parallel with many nodes in a specified cluster.'
sidebar_label: 'hudiCluster'
sidebar_position: 86
slug: /sql-reference/table-functions/hudiCluster
title: 'hudiCluster Table Function'
doc_type: 'reference'
hudiCluster Table Function
This is an extension to the hudi table function. It allows processing files from Apache Hudi tables in Amazon S3 in parallel with many nodes in a specified cluster. On the initiator it creates a connection to all nodes in the cluster and dispatches each file dynamically. On a worker node it asks the initiator for the next task to process and processes it. This is repeated until all tasks are finished.
Syntax {#syntax}
```sql
hudiCluster(cluster_name, url [,aws_access_key_id, aws_secret_access_key] [,format] [,structure] [,compression])
```
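For instance, a minimal sketch of a query (the cluster name, bucket path, and credentials below are illustrative placeholders, not values from this reference):

```sql
-- Hypothetical: read a Hudi table from S3 on every node of a cluster
-- named 'cluster_simple'. All names and paths here are assumptions.
SELECT count(*)
FROM hudiCluster(
    'cluster_simple',
    'http://bucket.s3.amazonaws.com/path/to/hudi_table/',
    'aws_access_key_id', 'aws_secret_access_key'
);
```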
Arguments {#arguments} | {"source_file": "hudiCluster.md"} | [
-0.018262669444084167,
-0.030575325712561607,
-0.1317782700061798,
0.05090489983558655,
0.06148189678788185,
-0.07326406240463257,
-0.01892757974565029,
-0.05372310057282448,
-0.041960667818784714,
0.028948185965418816,
0.05304761603474617,
0.015293086878955364,
0.0572860911488533,
-0.1575... |
a8479842-bb5c-4a14-8aeb-4dea7fc96806 | | Argument | Description |
|---|---|
| cluster_name | Name of a cluster that is used to build a set of addresses and connection parameters to remote and local servers. |
| url | Bucket URL with the path to an existing Hudi table in S3. |
| aws_access_key_id, aws_secret_access_key | Long-term credentials for the AWS account user. You can use these to authenticate your requests. These parameters are optional. If credentials are not specified, they are taken from the ClickHouse configuration. For more information see Using S3 for Data Storage. |
| format | The format of the file. |
| structure | Structure of the table. Format: 'column1_name column1_type, column2_name column2_type, ...'. |
| compression | Parameter is optional. Supported values: none, gzip/gz, brotli/br, xz/LZMA, zstd/zst | {"source_file": "hudiCluster.md"} | [
0.027868228033185005,
0.09311798959970474,
-0.03671836853027344,
-0.03408096358180046,
-0.058706168085336685,
0.019149260595440865,
0.03263988718390465,
0.04810817167162895,
0.04606948420405388,
-0.05770222842693329,
0.011740010231733322,
-0.04398162662982941,
0.00020942941773682833,
-0.03... |
6bb1202e-5c90-4766-b001-ac1b184d20a5 | | compression | Parameter is optional. Supported values: none, gzip/gz, brotli/br, xz/LZMA, zstd/zst. By default, compression is autodetected from the file extension. | | {"source_file": "hudiCluster.md"} | [
-0.06592731922864914,
0.1266450434923172,
-0.12161040306091309,
0.013599012978374958,
0.08750125765800476,
0.008525275625288486,
-0.007867097854614258,
0.02465568296611309,
-0.06047913059592247,
-0.003186460817232728,
-0.0006585583905689418,
0.029792368412017822,
0.0033148513175547123,
0.0... |
7a880431-bc99-40e2-bf72-5adbf7524152 | Returned value {#returned_value}
A table with the specified structure for reading data from the cluster in the specified Hudi table in S3.
Virtual Columns {#virtual-columns}
- _path — Path to the file. Type: LowCardinality(String).
- _file — Name of the file. Type: LowCardinality(String).
- _size — Size of the file in bytes. Type: Nullable(UInt64). If the file size is unknown, the value is NULL.
- _time — Last modified time of the file. Type: Nullable(DateTime). If the time is unknown, the value is NULL.
- _etag — The etag of the file. Type: LowCardinality(String). If the etag is unknown, the value is NULL.
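The virtual columns above can be selected alongside regular columns; a brief sketch (cluster name and S3 path are assumed placeholders):

```sql
-- Hypothetical: inspect per-file metadata for a Hudi table via virtual columns.
SELECT _path, _file, _size
FROM hudiCluster('cluster_simple', 'http://bucket.s3.amazonaws.com/path/to/hudi_table/')
LIMIT 10;
```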
Related {#related}
- Hudi engine
- Hudi table function | {"source_file": "hudiCluster.md"} | [
-0.003470450174063444,
0.03509821742773056,
-0.14115512371063232,
0.05787272751331329,
0.08729736506938934,
-0.030309228226542473,
-0.0026559995021671057,
0.023683523759245872,
-0.011917476542294025,
-0.02839376963675022,
0.10856592655181885,
-0.0024983983021229506,
-0.0008702123304829001,
... |
c5bfd01e-3a79-48d4-a4da-8f0460d57a45 | slug: /sql-reference/table-functions/generate_series
sidebar_position: 146
sidebar_label: 'generate_series'
title: 'generate_series (generateSeries)'
description: 'Returns a table with the single
generate_series
column (UInt64) that contains integers from start to stop inclusively.'
doc_type: 'reference'
generate_series Table Function
Alias: generateSeries
Syntax {#syntax}
Returns a table with the single 'generate_series' column (UInt64) that contains integers from start to stop inclusively:

```sql
generate_series(START, STOP)
```
Returns a table with the single 'generate_series' column (UInt64) that contains integers from start to stop inclusively with spacing between values given by STEP:

```sql
generate_series(START, STOP, STEP)
```
Examples {#examples}
The following queries return tables with the same content but different column names:
```sql
SELECT * FROM numbers(10, 5);
SELECT * FROM generate_series(10, 14);
```
And the following queries return tables with the same content but different column names (the second option being more efficient):
```sql
SELECT * FROM numbers(10, 11) WHERE number % 3 == (10 % 3);
SELECT * FROM generate_series(10, 20, 3);
``` | {"source_file": "generate_series.md"} | [
-0.06212311238050461,
-0.007629913743585348,
-0.07026229798793793,
0.020038876682519913,
-0.06406274437904358,
-0.0063003418035805225,
-0.013081125915050507,
0.005950300954282284,
0.03383304551243782,
-0.007618342991918325,
0.02259671501815319,
-0.025369182229042053,
0.01511217001825571,
-... |
d6d94bf0-3cf9-44d6-97e4-6a701d7ad1f7 | description: 'Allows processing files from Azure Blob storage in parallel with many
nodes in a specified cluster.'
sidebar_label: 'azureBlobStorageCluster'
sidebar_position: 15
slug: /sql-reference/table-functions/azureBlobStorageCluster
title: 'azureBlobStorageCluster'
doc_type: 'reference'
azureBlobStorageCluster Table Function
Allows processing files from Azure Blob Storage in parallel with many nodes in a specified cluster. On the initiator it creates a connection to all nodes in the cluster, expands asterisks in the blob path, and dispatches each file dynamically. On a worker node it asks the initiator for the next task to process and processes it. This is repeated until all tasks are finished.
This table function is similar to the
s3Cluster function
.
Syntax {#syntax}
```sql
azureBlobStorageCluster(cluster_name, connection_string|storage_account_url, container_name, blobpath, [account_name, account_key, format, compression, structure])
```
Arguments {#arguments} | {"source_file": "azureBlobStorageCluster.md"} | [
-0.033468976616859436,
-0.06145130842924118,
-0.1393815129995346,
0.05997748672962189,
-0.010019482113420963,
0.005757797043770552,
0.03367083519697189,
-0.050244297832250595,
0.024815598502755165,
0.0935368463397026,
0.002632511081174016,
0.0103943832218647,
0.0762859582901001,
-0.0352178... |
935e02c3-e203-4b48-9597-9760f8c02ad2 | | Argument | Description |
|---|---|
| cluster_name | Name of a cluster that is used to build a set of addresses and connection parameters to remote and local servers. |
| connection_string \| storage_account_url | connection_string includes the account name and key ([Create connection string](https://learn.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&bc=%2Fazure%2Fstorage%2Fblobs%2Fbreadcrumb%2Ftoc.json#configure-a-connection-string-for-an-azure-storage-account)). Alternatively, you can provide the storage account URL here and the account name and account key as separate parameters (see account_name and account_key). |
| container_name | {"source_file": "azureBlobStorageCluster.md"} | [
0.027937311679124832,
0.09285847842693329,
-0.03693150728940964,
-0.03431033715605736,
-0.05899081006646156,
0.018396951258182526,
0.03254036605358124,
0.04797203838825226,
0.04546676203608513,
-0.0572921521961689,
0.012018000707030296,
-0.04449009895324707,
0.00007389639358734712,
-0.0343... |
24fc7b54-be18-4ed6-9a39-69de1f4e5f2f | | container_name | Container name. |
| blobpath | File path. Supports the following wildcards in readonly mode: *, **, ?, {abc,def} and {N..M} where N, M — numbers, 'abc', 'def' — strings. |
| account_name | If storage_account_url is used, then the account name can be specified here. |
| account_key | If storage_account_url is used, then the account key can be specified here. |
| format | {"source_file": "azureBlobStorageCluster.md"} | [
0.06836150586605072,
0.043476469814777374,
-0.06927156448364258,
0.06018989160656929,
-0.04306984692811966,
0.0194651260972023,
0.05199325084686279,
0.05520141124725342,
-0.04026085510849953,
-0.07214479148387909,
0.00925468560308218,
0.018780449405312538,
0.05347467586398125,
0.0121834101... |
d86289a9-02c7-436c-a741-33bb33ab1281 | | format | The [format](/sql-reference/formats) of the file. |
| compression | Supported values: none, gzip/gz, brotli/br, xz/LZMA, zstd/zst. By default, compression is autodetected from the file extension (same as setting it to auto). |
| structure | Structure of the table. Format: 'column1_name column1_type, column2_name column2_type, ...'. | | {"source_file": "azureBlobStorageCluster.md"} | [
-0.023380208760499954,
0.04916134104132652,
-0.19277197122573853,
0.02704652026295662,
0.044896841049194336,
-0.0169720109552145,
-0.002326671965420246,
0.033485133200883865,
-0.08927492797374725,
0.05233220383524895,
-0.012998327612876892,
0.02400253154337406,
0.0531289242208004,
-0.03338... |
d12d5c3f-af08-4c67-b3d5-c7238f6b605c | Returned value {#returned_value}
A table with the specified structure for reading or writing data in the specified file.
Examples {#examples}
Similar to the AzureBlobStorage table engine, users can use the Azurite emulator for local Azure Storage development. Further details here. Below we assume Azurite is available at the hostname azurite1.
Select the count for the file test_cluster_*.csv, using all the nodes in the cluster_simple cluster:

```sql
SELECT count(*) FROM azureBlobStorageCluster(
    'cluster_simple', 'http://azurite1:10000/devstoreaccount1', 'testcontainer', 'test_cluster_count.csv', 'devstoreaccount1',
    'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV',
    'auto', 'key UInt64')
```
Using Shared Access Signatures (SAS) {#using-shared-access-signatures-sas-sas-tokens}
See
azureBlobStorage
for examples.
Related {#related}
AzureBlobStorage engine
azureBlobStorage table function | {"source_file": "azureBlobStorageCluster.md"} | [
0.08679725229740143,
0.027087239548563957,
-0.15551182627677917,
0.15920037031173706,
-0.05699022486805916,
0.028417080640792847,
0.05459505692124367,
0.0528549961745739,
0.03447939455509186,
0.10365619510412216,
0.05327225103974342,
-0.07135991752147675,
0.15003813803195953,
-0.0392322242... |
580cec78-07bd-45d0-9e81-c40f53d3294f | description: 'Returns a table that is connected via JDBC driver.'
sidebar_label: 'jdbc'
sidebar_position: 100
slug: /sql-reference/table-functions/jdbc
title: 'jdbc'
doc_type: 'reference'
jdbc Table Function
:::note
clickhouse-jdbc-bridge contains experimental code and is no longer supported. It may contain reliability issues and security vulnerabilities. Use it at your own risk.
ClickHouse recommends using the built-in table functions in ClickHouse, which provide a better alternative for ad-hoc querying scenarios (Postgres, MySQL, MongoDB, etc).
:::
The jdbc table function returns a table that is connected via a JDBC driver. This table function requires the separate clickhouse-jdbc-bridge program to be running. It supports Nullable types (based on the DDL of the remote table that is queried).
Syntax {#syntax}
sql
jdbc(datasource, external_database, external_table)
jdbc(datasource, external_table)
jdbc(named_collection)
Examples {#examples}
Instead of an external database name, a schema can be specified:
```sql
SELECT * FROM jdbc('jdbc:mysql://localhost:3306/?user=root&password=root', 'schema', 'table')
```

```sql
SELECT * FROM jdbc('mysql://localhost:3306/?user=root&password=root', 'select * from schema.table')
```

```sql
SELECT * FROM jdbc('mysql-dev?p1=233', 'num Int32', 'select toInt32OrZero(''{{p1}}'') as num')
```

```sql
SELECT *
FROM jdbc('mysql-dev?p1=233', 'num Int32', 'select toInt32OrZero(''{{p1}}'') as num')
```

```sql
SELECT a.datasource AS server1, b.datasource AS server2, b.name AS db
FROM jdbc('mysql-dev?datasource_column', 'show databases') a
INNER JOIN jdbc('self?datasource_column', 'show databases') b ON a.Database = b.name
``` | {"source_file": "jdbc.md"} | [
-0.007366587873548269,
-0.050016093999147415,
-0.08307713270187378,
0.04708503186702728,
-0.08781450241804123,
-0.036733780056238174,
0.042230021208524704,
0.060799740254879,
-0.07244769483804703,
-0.05873461812734604,
-0.018663164228200912,
-0.030275525525212288,
0.041935794055461884,
-0.... |
5795c626-71b8-4ec7-9bd6-8053e1d2edd3 | description: 'Parses data from arguments according to specified input format. If structure argument is not specified, it''s extracted from the data.'
slug: /sql-reference/table-functions/format
sidebar_position: 65
sidebar_label: 'format'
title: 'format'
doc_type: 'reference'
format Table Function
Parses data from arguments according to specified input format. If structure argument is not specified, it's extracted from the data.
Syntax {#syntax}
```sql
format(format_name, [structure], data)
```
Arguments {#arguments}
- format_name — The format of the data.
- structure — Structure of the table. Optional. Format: 'column1_name column1_type, column2_name column2_type, ...'.
- data — String literal or constant expression that returns a string containing data in the specified format.
Returned value {#returned_value}
A table with data parsed from the data argument according to the specified format and the specified or extracted structure.
Examples {#examples}
Without the structure argument:
Query:
```sql
SELECT * FROM format(JSONEachRow,
$$
{"a": "Hello", "b": 111}
{"a": "World", "b": 123}
{"a": "Hello", "b": 112}
{"a": "World", "b": 124}
$$)
```
Result:
```response
┌───b─┬─a─────┐
│ 111 │ Hello │
│ 123 │ World │
│ 112 │ Hello │
│ 124 │ World │
└─────┴───────┘
```
Query:
```sql
DESC format(JSONEachRow,
$$
{"a": "Hello", "b": 111}
{"a": "World", "b": 123}
{"a": "Hello", "b": 112}
{"a": "World", "b": 124}
$$)
```
Result:
```response
┌─name─┬─type──────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ b    │ Nullable(Float64) │              │                    │         │                  │                │
│ a    │ Nullable(String)  │              │                    │         │                  │                │
└──────┴───────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
With the structure argument:
Query:
```sql
SELECT * FROM format(JSONEachRow, 'a String, b UInt32',
$$
{"a": "Hello", "b": 111}
{"a": "World", "b": 123}
{"a": "Hello", "b": 112}
{"a": "World", "b": 124}
$$)
```
Result:
```response
┌─a─────┬───b─┐
│ Hello │ 111 │
│ World │ 123 │
│ Hello │ 112 │
│ World │ 124 │
└───────┴─────┘
```
Related {#related}
Formats | {"source_file": "format.md"} | [
0.005925033241510391,
0.05265098810195923,
-0.044982947409152985,
0.0776585191488266,
-0.06288275867700577,
0.003635789966210723,
0.022346841171383858,
0.05492869392037392,
-0.032104894518852234,
-0.029432348906993866,
-0.014722191728651524,
-0.03150353580713272,
0.018465150147676468,
-0.0... |
e7b4a618-339c-4e9e-9c6a-32ebe19d078a | description: 'An extension to the s3 table function, which allows processing files
from Amazon S3 and Google Cloud Storage in parallel with many nodes in a specified
cluster.'
sidebar_label: 's3Cluster'
sidebar_position: 181
slug: /sql-reference/table-functions/s3Cluster
title: 's3Cluster'
doc_type: 'reference'
s3Cluster Table Function
This is an extension to the s3 table function. It allows processing files from Amazon S3 and Google Cloud Storage in parallel with many nodes in a specified cluster. On the initiator it creates a connection to all nodes in the cluster, expands asterisks in the S3 file path, and dispatches each file dynamically. On a worker node it asks the initiator for the next task to process and processes it. This is repeated until all tasks are finished.
Syntax {#syntax}
```sql
s3Cluster(cluster_name, url[, NOSIGN | access_key_id, secret_access_key,[session_token]][, format][, structure][, compression_method][, headers][, extra_credentials])
s3Cluster(cluster_name, named_collection[, option=value [,..]])
```
Arguments {#arguments} | {"source_file": "s3Cluster.md"} | [
-0.06356830149888992,
-0.06931793689727783,
-0.11050725728273392,
0.008510052226483822,
0.04198533669114113,
-0.05180888622999191,
-0.010269619524478912,
-0.04105900228023529,
0.011338308453559875,
0.047521430999040604,
0.013192348182201385,
-0.024439454078674316,
0.09272747486829758,
-0.1... |
c4818367-98d2-4e02-9b0e-075e0b67cf58 | Arguments {#arguments}
| Argument | Description |
|---|---|
| cluster_name | Name of a cluster that is used to build a set of addresses and connection parameters to remote and local servers. |
| url | Path to a file or a bunch of files. Supports the following wildcards in readonly mode: *, **, ?, {'abc','def'} and {N..M} where N, M — numbers, abc, def — strings. For more information see Wildcards In Path. |
| NOSIGN | If this keyword is provided in place of credentials, all the requests will not be signed. |
| access_key_id and secret_access_key | Keys that specify credentials to use with the given endpoint. Optional. |
| session_token | Session token to use with the given keys. Optional when passing keys. |
| format | The format of the file. |
| structure | Structure of the table. Format: 'column1_name column1_type, column2_name column2_type, ...'. |
| compression_method | Parameter is optional. Supported values: none, gzip or gz, brotli or br, xz or LZMA, zstd or zst. By default, the compression method is autodetected from the file extension. |
| headers | Parameter is optional. Allows headers to be passed in the S3 request. Pass in the format headers(key=value), e.g. headers('x-amz-request-payer' = 'requester'). See here for an example of use. |
| extra_credentials | Optional. roleARN can be passed via this parameter. See here for an example. | | {"source_file": "s3Cluster.md"} | [
0.034412991255521774,
0.08291902393102646,
-0.05204023793339729,
-0.02858797274529934,
-0.08600813895463943,
-0.0002526851021684706,
0.04146214574575424,
0.04908876121044159,
0.0498221181333065,
-0.05376224219799042,
0.00317861489020288,
-0.04168727248907089,
0.008608034811913967,
-0.01578... |
09136114-3568-450f-93ad-26c7b2ec949a | Arguments can also be passed using named collections. In this case url, access_key_id, secret_access_key, format, structure, compression_method work in the same way, and some extra parameters are supported:
| Argument | Description |
|---|---|
| filename | Appended to the URL if specified. |
| use_environment_credentials | Enabled by default. Allows passing extra parameters using the environment variables AWS_CONTAINER_CREDENTIALS_RELATIVE_URI, AWS_CONTAINER_CREDENTIALS_FULL_URI, AWS_CONTAINER_AUTHORIZATION_TOKEN, AWS_EC2_METADATA_DISABLED. |
| no_sign_request | Disabled by default. |
| expiration_window_seconds | Default value is 120. |
Returned value {#returned_value}
A table with the specified structure for reading or writing data in the specified file.
Examples {#examples}
Select the data from all the files in the /root/data/clickhouse and /root/data/database/ folders, using all the nodes in the cluster_simple cluster:

```sql
SELECT * FROM s3Cluster(
    'cluster_simple',
    'http://minio1:9001/root/data/{clickhouse,database}/*',
    'minio',
    'ClickHouse_Minio_P@ssw0rd',
    'CSV',
    'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))'
) ORDER BY (name, value, polygon);
```
Count the total number of rows in all files in the cluster_simple cluster:
:::tip
If your listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use ?.
:::
For production use cases, it is recommended to use named collections. Here is an example:
```sql | {"source_file": "s3Cluster.md"} | [
-0.06688357144594193,
0.11275019496679306,
-0.13217231631278992,
0.031785715371370316,
-0.05982647463679314,
-0.05883067846298218,
-0.005744225345551968,
-0.0014767739921808243,
0.03336340934038162,
-0.03251016139984131,
-0.025594092905521393,
-0.015433917753398418,
0.07178713381290436,
-0... |
cf97760f-6c44-488b-9317-7def1bcd20b4 | For production use cases, it is recommended to use named collections. Here is an example:
```sql
CREATE NAMED COLLECTION creds AS
access_key_id = 'minio',
secret_access_key = 'ClickHouse_Minio_P@ssw0rd';
SELECT count(*) FROM s3Cluster(
'cluster_simple', creds, url='https://s3-object-url.csv',
format='CSV', structure='name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))'
)
```
Accessing private and public buckets {#accessing-private-and-public-buckets}
Users can use the same approaches as documented for the s3 function here.
Optimizing performance {#optimizing-performance}
For details on optimizing the performance of the s3 function see our detailed guide.
Related {#related}
S3 engine
s3 table function | {"source_file": "s3Cluster.md"} | [
0.0005471984623000026,
0.002065311186015606,
-0.1568545550107956,
0.03913789615035057,
-0.016123581677675247,
-0.008711384609341621,
-0.03740736469626427,
0.012425484135746956,
0.044271837919950485,
0.017282936722040176,
0.02853119745850563,
-0.06061001867055893,
0.12369602918624878,
-0.10... |
1886a30d-3dad-4af5-a4ea-204fee1f21ae | description: 'Allows to perform queries on data stored in a SQLite database.'
sidebar_label: 'sqlite'
sidebar_position: 185
slug: /sql-reference/table-functions/sqlite
title: 'sqlite'
doc_type: 'reference'
sqlite Table Function
Allows performing queries on data stored in an SQLite database.
Syntax {#syntax}
```sql
sqlite('db_path', 'table_name')
```
Arguments {#arguments}
- db_path — Path to a file with an SQLite database. String.
- table_name — Name of a table in the SQLite database. String.
Returned value {#returned_value}
A table object with the same columns as in the original SQLite table.
Example {#example}
Query:
```sql
SELECT * FROM sqlite('sqlite.db', 'table1') ORDER BY col2;
```
Result:
```text
┌─col1──┬─col2─┐
│ line1 │    1 │
│ line2 │    2 │
│ line3 │    3 │
└───────┴──────┘
```
Related {#related}
SQLite
table engine | {"source_file": "sqlite.md"} | [
-0.042461097240448,
0.003060826798900962,
-0.030918920412659645,
0.0684465765953064,
-0.050519395619630814,
-0.014847761020064354,
0.035615213215351105,
0.069008007645607,
-0.08412987738847733,
0.0030378997325897217,
0.03156149387359619,
0.038541752845048904,
0.019039370119571686,
-0.11541... |
6182f103-c5a1-47df-ad44-7e74dd44ff1a | description: 'This is an extension to the deltaLake table function.'
sidebar_label: 'deltaLakeCluster'
sidebar_position: 46
slug: /sql-reference/table-functions/deltalakeCluster
title: 'deltaLakeCluster'
doc_type: 'reference'
deltaLakeCluster Table Function
This is an extension to the deltaLake table function. It allows processing files from Delta Lake tables in Amazon S3 in parallel with many nodes in a specified cluster. On the initiator it creates a connection to all nodes in the cluster and dispatches each file dynamically. On a worker node it asks the initiator for the next task to process and processes it. This is repeated until all tasks are finished.
Syntax {#syntax}
```sql
deltaLakeCluster(cluster_name, url [,aws_access_key_id, aws_secret_access_key] [,format] [,structure] [,compression])
deltaLakeCluster(cluster_name, named_collection[, option=value [,..]])
deltaLakeS3Cluster(cluster_name, url [,aws_access_key_id, aws_secret_access_key] [,format] [,structure] [,compression])
deltaLakeS3Cluster(cluster_name, named_collection[, option=value [,..]])
deltaLakeAzureCluster(cluster_name, connection_string|storage_account_url, container_name, blobpath, [,account_name], [,account_key] [,format] [,compression_method])
deltaLakeAzureCluster(cluster_name, named_collection[, option=value [,..]])
```

deltaLakeS3Cluster is an alias of deltaLakeCluster; both are for S3.
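For instance, a minimal sketch of an S3-flavored call (the cluster name, bucket path, and credentials below are illustrative placeholders):

```sql
-- Hypothetical: read a Delta Lake table from S3 in parallel on the
-- 'cluster_simple' cluster. All names and paths here are assumptions.
SELECT count(*)
FROM deltaLakeCluster(
    'cluster_simple',
    'http://bucket.s3.amazonaws.com/path/to/delta_table/',
    'aws_access_key_id', 'aws_secret_access_key'
);
```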
Arguments {#arguments}
- cluster_name — Name of a cluster that is used to build a set of addresses and connection parameters to remote and local servers.

The description of all other arguments coincides with the description of the arguments in the equivalent deltaLake table function.
Returned value {#returned_value}
A table with the specified structure for reading data from the cluster in the specified Delta Lake table in S3.
Virtual Columns {#virtual-columns}
- _path — Path to the file. Type: LowCardinality(String).
- _file — Name of the file. Type: LowCardinality(String).
- _size — Size of the file in bytes. Type: Nullable(UInt64). If the file size is unknown, the value is NULL.
- _time — Last modified time of the file. Type: Nullable(DateTime). If the time is unknown, the value is NULL.
- _etag — The etag of the file. Type: LowCardinality(String). If the etag is unknown, the value is NULL.
Related {#related}
- deltaLake engine
- deltaLake table function | {"source_file": "deltalakeCluster.md"} | [
-0.052118536084890366,
-0.04726121574640274,
-0.05173155665397644,
0.01717623509466648,
0.020578252151608467,
-0.06084731966257095,
-0.0076533895917236805,
-0.01697973720729351,
-0.03331710770726204,
0.031194008886814117,
0.011477448977530003,
-0.04519246518611908,
0.049194712191820145,
-0... |
23c84739-6777-4146-a840-c0b83583ac2b | slug: /sql-reference/table-functions/numbers
sidebar_position: 145
sidebar_label: 'numbers'
title: 'numbers'
description: 'Returns tables with a single
number
column that contains specifiable integers.'
doc_type: 'reference'
numbers Table Function
- numbers(N) — Returns a table with the single 'number' column (UInt64) that contains integers from 0 to N-1.
- numbers(N, M) — Returns a table with the single 'number' column (UInt64) that contains integers from N to (N + M - 1).
- numbers(N, M, S) — Returns a table with the single 'number' column (UInt64) that contains integers from N to (N + M - 1) with step S.
Similar to the system.numbers table, it can be used for testing and generating successive values; numbers(N, M) is more efficient than system.numbers.
The following queries are equivalent:
```sql
SELECT * FROM numbers(10);
SELECT * FROM numbers(0, 10);
SELECT * FROM system.numbers LIMIT 10;
SELECT * FROM system.numbers WHERE number BETWEEN 0 AND 9;
SELECT * FROM system.numbers WHERE number IN (0, 1, 2, 3, 4, 5, 6, 7, 8, 9);
```
And the following queries are equivalent:
```sql
SELECT number * 2 FROM numbers(10);
SELECT (number - 10) * 2 FROM numbers(10, 10);
SELECT * FROM numbers(0, 20, 2);
```
Examples:
```sql
-- Generate a sequence of dates from 2010-01-01 to 2010-12-31
SELECT toDate('2010-01-01') + number AS d FROM numbers(365);
``` | {"source_file": "numbers.md"} | [
0.02016141451895237,
0.026418182998895645,
-0.08137988299131393,
-0.011285861022770405,
-0.0579838789999485,
0.00968625582754612,
0.048681095242500305,
0.06123625114560127,
-0.026072269305586815,
0.011828216724097729,
0.0007757946150377393,
0.04277675226330757,
0.08587189018726349,
-0.1028... |
0aae7f54-827f-4299-a0e6-58e82cdcc14c | description: 'Creates a temporary table of the specified structure with the Null table
engine. The function is used for the convenience of test writing and demonstrations.'
sidebar_label: 'null function'
sidebar_position: 140
slug: /sql-reference/table-functions/null
title: 'null'
doc_type: 'reference'
null Table Function
Creates a temporary table of the specified structure with the Null table engine. According to the Null-engine properties, the table data is ignored and the table itself is immediately dropped right after query execution. The function is used for the convenience of writing tests and demonstrations.
Syntax {#syntax}
```sql
null('structure')
```
Argument {#argument}
- structure — A list of columns and column types. String.
Returned value {#returned_value}
A temporary Null-engine table with the specified structure.
Example {#example}
Query with the null function:

```sql
INSERT INTO function null('x UInt64') SELECT * FROM numbers_mt(1000000000);
```
can replace three queries:
```sql
CREATE TABLE t (x UInt64) ENGINE = Null;
INSERT INTO t SELECT * FROM numbers_mt(1000000000);
DROP TABLE IF EXISTS t;
```
Related {#related}
Null table engine | {"source_file": "null.md"} | [
-0.039532359689474106,
0.0563693568110466,
-0.06429790705442429,
0.07877881824970245,
-0.04622722044587135,
-0.03843285143375397,
0.010532945394515991,
0.058684080839157104,
-0.018069906160235405,
0.024482788518071175,
0.09250746667385101,
-0.04636533185839653,
0.06404415518045425,
-0.1199... |
7df9f0c8-8d3a-40ac-b4f0-07973f9bd643 | description: 'Table function
remote
allows to access remote servers on-the-fly,
i.e. without creating a distributed table. Table function
remoteSecure
is same
as
remote
but over a secure connection.'
sidebar_label: 'remote'
sidebar_position: 175
slug: /sql-reference/table-functions/remote
title: 'remote, remoteSecure'
doc_type: 'reference'
remote, remoteSecure Table Function
The table function remote allows accessing remote servers on the fly, i.e. without creating a Distributed table. The table function remoteSecure is the same as remote but over a secure connection.
Both functions can be used in SELECT and INSERT queries.
Syntax {#syntax}
```sql
remote(addresses_expr, [db, table, user [, password], sharding_key])
remote(addresses_expr, [db.table, user [, password], sharding_key])
remote(named_collection[, option=value [,..]])
remoteSecure(addresses_expr, [db, table, user [, password], sharding_key])
remoteSecure(addresses_expr, [db.table, user [, password], sharding_key])
remoteSecure(named_collection[, option=value [,..]])
```
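For instance, a minimal sketch (the host name, database, and table below are illustrative placeholders):

```sql
-- Hypothetical: query a table on a remote server without creating a
-- Distributed table. 'example01-01-1' and db.hits are assumed names.
SELECT count(*) FROM remote('example01-01-1', db.hits, 'default', '');
```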
Parameters {#parameters} | {"source_file": "remote.md"} | [
0.0257264394313097,
-0.015886208042502403,
-0.060461703687906265,
0.0508449487388134,
-0.08531665802001953,
-0.0205149557441473,
0.013044671155512333,
0.030187496915459633,
-0.021929526701569557,
0.04385719075798988,
0.04895564541220665,
-0.005154758226126432,
0.12599888443946838,
-0.05497... |
620064bd-a3ad-4d97-b8d6-1e9e5693101a | | Argument | Description |
|----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
addresses_expr
| A remote server address or an expression that generates multiple addresses of remote servers. Format:
host
or
host:port
.
The
host
can be specified as a server name, or as an IPv4 or IPv6 address. An IPv6 address must be specified in square brackets.
The
port
is the TCP port on the remote server. If the port is omitted, it uses
tcp_port
from the server config file for table function
remote
(by default, 9000) and
tcp_port_secure
for table function
remoteSecure
(by default, 9440).
For IPv6 addresses, a port is required.
If only parameter
addresses_expr
is specified,
db
and
table
will use
system.one
by default.
Type:
String
. |
|
db
| Database name. Type:
String
. |
|
table
| Table name. Type:
String
. |
|
user
| User name. If not specified,
default
is used. Type:
String
. |
|
password
| User password. If not specified, an empty password is used. Type:
String
. |
|
sharding_key
| Sharding key to support distributing data across nodes. For example:
insert into remote('127.0.0.1:9000,127.0.0.2', db, table, 'default', rand())
. Type:
UInt32 | {"source_file": "remote.md"} | [
0.02788747288286686,
0.09290937334299088,
-0.03714551776647568,
-0.03443417325615883,
-0.05947873741388321,
0.0183249581605196,
0.03254047408699989,
0.04820331931114197,
0.04512941464781761,
-0.05685262009501457,
0.012245319783687592,
-0.044619567692279816,
0.00027788375155068934,
-0.03486... |
9d5a214e-f7aa-49ca-9e65-bac48c4d804b | |
sharding_key
| Sharding key to support distributing data across nodes. For example:
insert into remote('127.0.0.1:9000,127.0.0.2', db, table, 'default', rand())
. Type:
UInt32
. | | {"source_file": "remote.md"} | [
0.0651077851653099,
-0.027385547757148743,
-0.11012156307697296,
0.04056470841169357,
-0.0760841816663742,
-0.04929322376847267,
-0.03172799572348595,
-0.024417459964752197,
-0.04907270893454552,
0.02890687808394432,
0.04147571325302124,
-0.04045597091317177,
0.11743267625570297,
-0.050231... |
6db3507d-808d-4124-bc74-a3851f47eb67 | Arguments also can be passed using
named collections
.
Returned value {#returned-value}
A table located on a remote server.
Usage {#usage}
As table functions
remote
and
remoteSecure
re-establish the connection for each request, it is recommended to use a
Distributed
table instead. Also, if hostnames are set, the names are resolved, and errors are not counted when working with various replicas. When processing a large number of queries, always create the
Distributed
table ahead of time, and do not use the
remote
table function.
The
remote
table function can be useful in the following cases:
One-time data migration from one system to another
Accessing a specific server for data comparison, debugging, and testing, i.e. ad-hoc connections.
Queries between various ClickHouse clusters for research purposes.
Infrequent distributed requests that are made manually.
Distributed requests where the set of servers is re-defined each time.
Addresses {#addresses}
```text
example01-01-1
example01-01-1:9440
example01-01-1:9000
localhost
127.0.0.1
```
Multiple addresses can be comma-separated. In this case, ClickHouse will use distributed processing and send the query to all specified addresses (like shards with different data). Example:
text
example01-01-1,example01-02-1
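For instance, a count fanned out over both shards could look like this (the hostnames and table here are hypothetical):

```sql
SELECT count() FROM remote('example01-01-1,example01-02-1', default.hits)
```

ClickHouse sends the query to both addresses and merges the partial results on the initiator.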
Examples {#examples}
Selecting data from a remote server: {#selecting-data-from-a-remote-server}
sql
SELECT * FROM remote('127.0.0.1', db.remote_engine_table) LIMIT 3;
Or using
named collections
:
sql
CREATE NAMED COLLECTION creds AS
host = '127.0.0.1',
database = 'db';
SELECT * FROM remote(creds, table='remote_engine_table') LIMIT 3;
Inserting data into a table on a remote server: {#inserting-data-into-a-table-on-a-remote-server}
sql
CREATE TABLE remote_table (name String, value UInt32) ENGINE=Memory;
INSERT INTO FUNCTION remote('127.0.0.1', currentDatabase(), 'remote_table') VALUES ('test', 42);
SELECT * FROM remote_table;
Migration of tables from one system to another: {#migration-of-tables-from-one-system-to-another}
This example uses one table from a sample dataset. The database is
imdb
, and the table is
actors
.
On the source ClickHouse system (the system that currently hosts the data) {#on-the-source-clickhouse-system-the-system-that-currently-hosts-the-data}
Verify the source database and table name (
imdb.actors
)
sql
show databases
sql
show tables in imdb
Get the CREATE TABLE statement from the source:
sql
SELECT create_table_query
FROM system.tables
WHERE database = 'imdb' AND table = 'actors'
Response
sql
CREATE TABLE imdb.actors (`id` UInt32,
`first_name` String,
`last_name` String,
`gender` FixedString(1))
ENGINE = MergeTree
ORDER BY (id, first_name, last_name, gender); | {"source_file": "remote.md"} | [
-0.038778990507125854,
0.0028171574231237173,
-0.08482664078474045,
0.049878522753715515,
-0.057878799736499786,
-0.13266758620738983,
-0.009915422648191452,
-0.022970780730247498,
0.03599393740296364,
0.058989230543375015,
0.0006954885320737958,
-0.003704777918756008,
0.08906065672636032,
... |
f8d7bb81-5110-4238-b63d-90db075850d1 | On the destination ClickHouse system {#on-the-destination-clickhouse-system}
Create the destination database:
sql
CREATE DATABASE imdb
Using the CREATE TABLE statement from the source, create the destination:
sql
CREATE TABLE imdb.actors (`id` UInt32,
`first_name` String,
`last_name` String,
`gender` FixedString(1))
ENGINE = MergeTree
ORDER BY (id, first_name, last_name, gender);
Back on the source deployment {#back-on-the-source-deployment}
Insert into the new database and table created on the remote system. You will need the host, port, username, password, destination database, and destination table.
sql
INSERT INTO FUNCTION
remoteSecure('remote.clickhouse.cloud:9440', 'imdb.actors', 'USER', 'PASSWORD')
SELECT * from imdb.actors
Globbing {#globs-in-addresses}
Patterns in curly brackets
{ }
are used to generate a set of shards and to specify replicas. If there are multiple pairs of curly brackets, then the direct product of the corresponding sets is generated.
The following pattern types are supported.
{a,b,c}
- Represents any of the alternative strings
a
,
b
or
c
. The pattern is replaced with
a
in the first shard address and replaced with
b
in the second shard address and so on. For instance,
example0{1,2}-1
generates addresses
example01-1
and
example02-1
.
{N..M}
- A range of numbers. This pattern generates shard addresses with incrementing indices from
N
to (and including)
M
. For instance,
example0{1..2}-1
generates
example01-1
and
example02-1
.
{0n..0m}
- A range of numbers with leading zeroes. This pattern preserves leading zeroes in indices. For instance,
example{01..03}-1
generates
example01-1
,
example02-1
and
example03-1
.
{a|b}
- Any number of variants separated by a
|
. The pattern specifies replicas. For instance,
example01-{1|2}
generates replicas
example01-1
and
example01-2
.
The query will be sent to the first healthy replica. However, for
remote
the replicas are iterated in the order currently set in the
load_balancing
setting.
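The pattern types above can be combined. As a sketch (hypothetical hostnames and table), the following addresses two shards, each with two replica candidates, the replica being chosen according to the load_balancing setting:

```sql
SELECT count() FROM remote('example0{1..2}-{1|2}', default.hits)
```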
The number of generated addresses is limited by
table_function_remote_max_addresses
setting. | {"source_file": "remote.md"} | [
0.044025395065546036,
-0.12593358755111694,
-0.05062158778309822,
-0.0126033341512084,
-0.061699796468019485,
-0.05373286455869675,
0.048739805817604065,
-0.023332780227065086,
-0.018307240679860115,
0.077776238322258,
0.043677520006895065,
-0.042702075093984604,
0.14701028168201447,
-0.02... |
07482516-88a8-446e-82f5-32f7c4e1ea36 | description: 'Evaluates a prometheus query using data from a TimeSeries table.'
sidebar_label: 'prometheusQuery'
sidebar_position: 145
slug: /sql-reference/table-functions/prometheusQuery
title: 'prometheusQuery'
doc_type: 'reference'
prometheusQuery Table Function
Evaluates a prometheus query using data from a TimeSeries table.
Syntax {#syntax}
sql
prometheusQuery('db_name', 'time_series_table', 'promql_query', evaluation_time)
prometheusQuery(db_name.time_series_table, 'promql_query', evaluation_time)
prometheusQuery('time_series_table', 'promql_query', evaluation_time)
Arguments {#arguments}
db_name
- The name of the database where a TimeSeries table is located.
time_series_table
- The name of a TimeSeries table.
promql_query
- A query written in
PromQL syntax
.
evaluation_time
- The evaluation timestamp. To evaluate a query at the current time, use
now()
as
evaluation_time
.
Returned value {#returned_value}
The function can return different columns depending on the result type of the query passed to parameter
promql_query
:
| Result Type | Result Columns | Example |
|-------------|----------------|---------|
| vector | tags Array(Tuple(String, String)), timestamp TimestampType, value ValueType | prometheusQuery(mytable, 'up') |
| matrix | tags Array(Tuple(String, String)), time_series Array(Tuple(TimestampType, ValueType)) | prometheusQuery(mytable, 'up[1m]') |
| scalar | scalar ValueType | prometheusQuery(mytable, '1h30m') |
| string | string String | prometheusQuery(mytable, '"abc"') |
Example {#example}
sql
SELECT * FROM prometheusQuery(mytable, 'rate(http_requests{job="prometheus"}[10m])[1h:10m]', now()) | {"source_file": "prometheusQuery.md"} | [
-0.023672856390476227,
0.05138961225748062,
-0.07354822754859924,
0.043966468423604965,
-0.055272649973630905,
-0.07564765214920044,
0.05111004412174225,
0.0671299546957016,
-0.0305333249270916,
0.005715205799788237,
-0.020193593576550484,
-0.06953723728656769,
0.03552728146314621,
-0.0400... |
5f4a977a-8e71-4fdc-8740-dc8ab8f5611b | description: 'Allows
SELECT
queries to be performed on data that is stored on a
remote MongoDB server.'
sidebar_label: 'mongodb'
sidebar_position: 135
slug: /sql-reference/table-functions/mongodb
title: 'mongodb'
doc_type: 'reference'
mongodb Table Function
Allows
SELECT
queries to be performed on data that is stored on a remote MongoDB server.
Syntax {#syntax}
sql
mongodb(host:port, database, collection, user, password, structure[, options[, oid_columns]])
Arguments {#arguments}
| Argument | Description |
|---------------|--------------------------------------------------------------------------------------------------------|
|
host:port
| MongoDB server address. |
|
database
| Remote database name. |
|
collection
| Remote collection name. |
|
user
| MongoDB user. |
|
password
| User password. |
|
structure
| The schema for the ClickHouse table returned from this function. |
|
options
| MongoDB connection string options (optional parameter). |
|
oid_columns
| Comma-separated list of columns that should be treated as
oid
in the WHERE clause.
_id
by default. |
:::tip
If you are using the MongoDB Atlas cloud offering please add these options:
ini
'connectTimeoutMS=10000&ssl=true&authSource=admin'
:::
You can also connect by URI:
sql
mongodb(uri, collection, structure[, oid_columns])
| Argument | Description |
|---------------|--------------------------------------------------------------------------------------------------------|
|
uri
| Connection string. |
|
collection
| Remote collection name. |
|
structure
| The schema for the ClickHouse table returned from this function. |
|
oid_columns
| Comma-separated list of columns that should be treated as
oid
in the WHERE clause.
_id
by default. |
Returned value {#returned_value}
A table object with the same columns as the original MongoDB table.
Examples {#examples}
Suppose we have a collection named
my_collection
defined in a MongoDB database named
test
, and we insert a couple of documents: | {"source_file": "mongodb.md"} | [
0.02347419410943985,
0.05342671647667885,
-0.038090210407972336,
0.09063912183046341,
-0.03230495750904083,
-0.04341159388422966,
-0.008947181515395641,
0.04317895323038101,
0.029825683683156967,
0.010735634714365005,
-0.03518659248948097,
-0.05814861133694649,
-0.016494987532496452,
-0.05... |
1d865997-cb53-4de6-8aac-5cbe205edbbd | Examples {#examples}
Suppose we have a collection named
my_collection
defined in a MongoDB database named
test
, and we insert a couple of documents:
```sql
db.createUser({user:"test_user",pwd:"password",roles:[{role:"readWrite",db:"test"}]})
db.createCollection("my_collection")
db.my_collection.insertOne(
{ log_type: "event", host: "120.5.33.9", command: "check-cpu-usage -w 75 -c 90" }
)
db.my_collection.insertOne(
{ log_type: "event", host: "120.5.33.4", command: "system-check"}
)
```
Let's query the collection using the
mongodb
table function:
sql
SELECT * FROM mongodb(
'127.0.0.1:27017',
'test',
'my_collection',
'test_user',
'password',
'log_type String, host String, command String',
'connectTimeoutMS=10000'
)
or:
sql
SELECT * FROM mongodb(
'mongodb://test_user:password@127.0.0.1:27017/test?connectTimeoutMS=10000',
'my_collection',
'log_type String, host String, command String'
)
Related {#related}
The
MongoDB
table engine
Using MongoDB as a dictionary source | {"source_file": "mongodb.md"} | [
0.004479300696402788,
0.014104129746556282,
-0.05697503685951233,
0.12935613095760345,
-0.050558608025312424,
-0.09928533434867859,
0.013094413094222546,
0.04611710086464882,
0.08814587444067001,
0.01570088043808937,
0.03053445741534233,
-0.050967685878276825,
0.030470872297883034,
-0.0313... |
c3c1a9f4-4827-4ecc-8ee4-fc63309f7aef | description: 'Documentation for the Expression special data type'
sidebar_label: 'Expression'
sidebar_position: 58
slug: /sql-reference/data-types/special-data-types/expression
title: 'Expression'
doc_type: 'reference'
Expression
Expressions are used for representing lambdas in higher-order functions. | {"source_file": "expression.md"} | [
-0.06970812380313873,
0.04522060975432396,
0.019556740298867226,
0.035592589527368546,
-0.02001969702541828,
0.06171613559126854,
0.035262566059827805,
0.05574173107743263,
0.0009711345192044973,
0.03567887470126152,
0.024196185171604156,
-0.01482643187046051,
0.002764973556622863,
0.00985... |
df11b833-64ae-4e38-bd3a-a70a485cffe1 | description: 'Documentation for the Nothing special data type'
sidebar_label: 'Nothing'
sidebar_position: 60
slug: /sql-reference/data-types/special-data-types/nothing
title: 'Nothing'
doc_type: 'reference'
Nothing
The only purpose of this data type is to represent cases where a value is not expected. So you can't create a
Nothing
type value.
For example, literal
NULL
has type of
Nullable(Nothing)
. See more about
Nullable
.
The
Nothing
type can also be used to denote empty arrays:
sql
SELECT toTypeName(array())
text
┌─toTypeName(array())─┐
│ Array(Nothing)      │
└─────────────────────┘
0.034019459038972855,
0.033661577850580215,
-0.0631960928440094,
0.0922873467206955,
-0.06938908994197845,
0.010374434292316437,
0.06372497230768204,
-0.02663678117096424,
-0.008855901658535004,
-0.042238540947437286,
0.06915131956338882,
-0.04468797892332077,
0.040098413825035095,
-0.0403... |
21cc3a56-e925-4f4b-88c7-55f823def991 | description: 'Documentation for the Set special data type used in IN expressions'
sidebar_label: 'Set'
sidebar_position: 59
slug: /sql-reference/data-types/special-data-types/set
title: 'Set'
doc_type: 'reference'
Set
Used for the right half of an
IN
expression. | {"source_file": "set.md"} | [
-0.03960660472512245,
0.0699777752161026,
-0.0023732201661914587,
0.06996980309486389,
-0.06598174571990967,
0.0745178610086441,
0.06505781412124634,
0.08530683070421219,
-0.05582568421959877,
0.024214770644903183,
0.010542684234678745,
-0.024078965187072754,
0.04668152704834938,
-0.070084... |
6e06e75d-6998-43b1-8134-e5efb72453cf | description: 'Documentation for the Interval special data type'
sidebar_label: 'Interval'
sidebar_position: 61
slug: /sql-reference/data-types/special-data-types/interval
title: 'Interval'
doc_type: 'reference'
Interval
The family of data types representing time and date intervals. The resulting types of the
INTERVAL
operator.
Structure:
Time interval as an unsigned integer value.
Type of an interval.
Supported interval types:
NANOSECOND
MICROSECOND
MILLISECOND
SECOND
MINUTE
HOUR
DAY
WEEK
MONTH
QUARTER
YEAR
For each interval type, there is a separate data type. For example, the
DAY
interval corresponds to the
IntervalDay
data type:
sql
SELECT toTypeName(INTERVAL 4 DAY)
text
┌─toTypeName(toIntervalDay(4))─┐
│ IntervalDay                  │
└──────────────────────────────┘
Usage Remarks {#usage-remarks}
You can use
Interval
-type values in arithmetical operations with
Date
and
DateTime
-type values. For example, you can add 4 days to the current time:
sql
SELECT now() AS current_date_time, current_date_time + INTERVAL 4 DAY
text
┌───current_date_time─┬─plus(now(), toIntervalDay(4))─┐
│ 2019-10-23 10:58:45 │           2019-10-27 10:58:45 │
└─────────────────────┴───────────────────────────────┘
It is also possible to use multiple intervals simultaneously:
sql
SELECT now() AS current_date_time, current_date_time + (INTERVAL 4 DAY + INTERVAL 3 HOUR)
text
┌───current_date_time─┬─plus(current_date_time, plus(toIntervalDay(4), toIntervalHour(3)))─┐
│ 2024-08-08 18:31:39 │                                                2024-08-12 21:31:39 │
└─────────────────────┴─────────────────────────────────────────────────────────────────────┘
And to compare values with different intervals:
sql
SELECT toIntervalMicrosecond(3600000000) = toIntervalHour(1);
text
┌─equals(toIntervalMicrosecond(3600000000), toIntervalHour(1))─┐
│                                                            1 │
└──────────────────────────────────────────────────────────────┘
See Also {#see-also}
INTERVAL
operator
toInterval
type conversion functions | {"source_file": "interval.md"} | [
0.004078092519193888,
0.02252046763896942,
0.04980449751019478,
0.056826017796993256,
-0.12420663982629776,
0.01775984838604927,
0.02756928652524948,
0.023299338296055794,
-0.045712217688560486,
-0.05234106630086899,
0.026113055646419525,
-0.07077505439519882,
0.023711223155260086,
0.00757... |
081d39b4-4ac2-4ee6-ac19-f12a026290f7 | description: 'Overview of special data types in ClickHouse that are used for intermediate
results during query execution'
sidebar_label: 'Special Data Types'
sidebar_position: 55
slug: /sql-reference/data-types/special-data-types/
title: 'Special Data Types'
doc_type: 'reference'
Special data types
Special data type values can't be serialized for saving in a table or output in query results, but can be used as an intermediate result during query execution. | {"source_file": "index.md"} | [
0.025821011513471603,
0.015313525684177876,
-0.06903602182865143,
0.07906704396009445,
-0.0903855487704277,
0.044304680079221725,
-0.000013979644791106693,
0.027220048010349274,
-0.09721255302429199,
-0.017748428508639336,
0.038245391100645065,
0.01648355834186077,
0.03680555522441864,
-0.... |
7121ad1e-8b08-4da2-8d3c-4799b795f888 | description: 'Overview of domain types in ClickHouse, which extend base types with
additional features'
sidebar_label: 'Domains'
sidebar_position: 56
slug: /sql-reference/data-types/domains/
title: 'Domains'
doc_type: 'reference'
Domains
Domains are special-purpose types that add extra features on top of existing base types, while leaving the on-wire and on-disk format of the underlying data type intact. Currently, ClickHouse does not support user-defined domains.
You can use domains anywhere the corresponding base type can be used, for example:
Create a column of a domain type
Read/write values from/to domain column
Use it as an index if a base type can be used as an index
Call functions with values of domain column
Extra Features of Domains {#extra-features-of-domains}
Explicit column type name in
SHOW CREATE TABLE
or
DESCRIBE TABLE
Input from human-friendly format with
INSERT INTO domain_table(domain_column) VALUES(...)
Output to human-friendly format for
SELECT domain_column FROM domain_table
Loading data from an external source in the human-friendly format:
INSERT INTO domain_table FORMAT CSV ...
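As a minimal sketch of these features, the built-in IPv4 domain (implemented in older ClickHouse versions as a domain over UInt32) accepts and prints the human-friendly dotted notation while storing the numeric form:

```sql
CREATE TABLE hosts (url String, addr IPv4) ENGINE = MergeTree ORDER BY url;
INSERT INTO hosts (url, addr) VALUES ('https://clickhouse.com', '116.106.34.242');
SELECT addr, toTypeName(addr) FROM hosts;
```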
Limitations {#limitations}
Can't convert index column of base type to domain type via
ALTER TABLE
.
Can't implicitly convert string values into domain values when inserting data from another column or table.
Domains add no constraints on stored values. | {"source_file": "index.md"} | [
0.036702483892440796,
-0.10947483032941818,
-0.04652116447687149,
0.0265483520925045,
-0.05124308541417122,
-0.06230226159095764,
-0.009686417877674103,
-0.003799915313720703,
-0.0886056050658226,
-0.030496662482619286,
-0.04088617488741875,
-0.043118249624967575,
0.03771551698446274,
-0.0... |
2df28a0e-6e03-4036-bc9a-a1dcc65249f9 | description: 'Overview of nested data structures in ClickHouse'
sidebar_label: 'Nested(Name1 Type1, Name2 Type2, ...)'
sidebar_position: 57
slug: /sql-reference/data-types/nested-data-structures/nested
title: 'Nested'
doc_type: 'guide'
Nested
Nested(name1 Type1, Name2 Type2, ...) {#nestedname1-type1-name2-type2-}
A nested data structure is like a table inside a cell. The parameters of a nested data structure (the column names and types) are specified the same way as in a
CREATE TABLE
query. Each table row can correspond to any number of rows in a nested data structure.
Example:
sql
CREATE TABLE test.visits
(
CounterID UInt32,
StartDate Date,
Sign Int8,
IsNew UInt8,
VisitID UInt64,
UserID UInt64,
...
Goals Nested
(
ID UInt32,
Serial UInt32,
EventTime DateTime,
Price Int64,
OrderID String,
CurrencyID UInt32
),
...
) ENGINE = CollapsingMergeTree(StartDate, intHash32(UserID), (CounterID, StartDate, intHash32(UserID), VisitID), 8192, Sign)
This example declares the
Goals
nested data structure, which contains data about conversions (goals reached). Each row in the 'visits' table can correspond to zero or any number of conversions.
When
flatten_nested
is set to
0
(which is not the default), arbitrary levels of nesting are supported.
In most cases, when working with a nested data structure, its columns are specified with column names separated by a dot. These columns make up an array of matching types. All the column arrays of a single nested data structure have the same length.
Example:
sql
SELECT
Goals.ID,
Goals.EventTime
FROM test.visits
WHERE CounterID = 101500 AND length(Goals.ID) < 5
LIMIT 10 | {"source_file": "index.md"} | [
-0.023283494636416435,
0.004589820746332407,
-0.018842726945877075,
0.10631883144378662,
-0.05875248834490776,
-0.050961267203092575,
0.009539592079818249,
0.03776666522026062,
-0.010167622938752174,
0.01270398311316967,
0.03863997012376785,
-0.03830597177147865,
0.06811261177062988,
-0.04... |
32caed8b-d5d3-4843-bfae-55994682f1cb | Example:
sql
SELECT
Goals.ID,
Goals.EventTime
FROM test.visits
WHERE CounterID = 101500 AND length(Goals.ID) < 5
LIMIT 10
text
┌─Goals.ID───────────────────────┬─Goals.EventTime─────────────────────────────────────────────────────────────────────────────┐
│ [1073752,591325,591325]        │ ['2014-03-17 16:38:10','2014-03-17 16:38:48','2014-03-17 16:42:27']                         │
│ [1073752]                      │ ['2014-03-17 00:28:25']                                                                     │
│ [1073752]                      │ ['2014-03-17 10:46:20']                                                                     │
│ [1073752,591325,591325,591325] │ ['2014-03-17 13:59:20','2014-03-17 22:17:55','2014-03-17 22:18:07','2014-03-17 22:18:51']   │
│ []                             │ []                                                                                          │
│ [1073752,591325,591325]        │ ['2014-03-17 11:37:06','2014-03-17 14:07:47','2014-03-17 14:36:21']                         │
│ []                             │ []                                                                                          │
│ []                             │ []                                                                                          │
│ [591325,1073752]               │ ['2014-03-17 00:46:05','2014-03-17 00:46:05']                                               │
│ [1073752,591325,591325,591325] │ ['2014-03-17 13:28:33','2014-03-17 13:30:26','2014-03-17 18:51:21','2014-03-17 18:51:45']   │
└────────────────────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────┘
It is easiest to think of a nested data structure as a set of multiple column arrays of the same length.
The only place where a SELECT query can specify the name of an entire nested data structure instead of individual columns is the ARRAY JOIN clause. For more information, see "ARRAY JOIN clause". Example:
sql
SELECT
Goal.ID,
Goal.EventTime
FROM test.visits
ARRAY JOIN Goals AS Goal
WHERE CounterID = 101500 AND length(Goals.ID) < 5
LIMIT 10
text
┌─Goal.ID─┬──────Goal.EventTime─┐
│ 1073752 │ 2014-03-17 16:38:10 │
│  591325 │ 2014-03-17 16:38:48 │
│  591325 │ 2014-03-17 16:42:27 │
│ 1073752 │ 2014-03-17 00:28:25 │
│ 1073752 │ 2014-03-17 10:46:20 │
│ 1073752 │ 2014-03-17 13:59:20 │
│  591325 │ 2014-03-17 22:17:55 │
│  591325 │ 2014-03-17 22:18:07 │
│  591325 │ 2014-03-17 22:18:51 │
│ 1073752 │ 2014-03-17 11:37:06 │
└─────────┴─────────────────────┘
You can't perform SELECT for an entire nested data structure. You can only explicitly list individual columns that are part of it.
For an INSERT query, you should pass all the component column arrays of a nested data structure separately (as if they were individual column arrays). During insertion, the system checks that they have the same length.
For a DESCRIBE query, the columns in a nested data structure are listed separately in the same way. | {"source_file": "index.md"} | [
0.033437661826610565,
-0.0014488971792161465,
-0.034823156893253326,
0.0631653293967247,
0.04394472762942314,
0.04803165793418884,
0.08061409741640091,
0.010598531924188137,
0.04501635581254959,
0.02385459467768669,
-0.020186183974146843,
-0.02640351466834545,
0.014988654293119907,
0.01581... |
ac7f4df8-ea2d-4a62-824c-6dc9431bf114 | For a DESCRIBE query, the columns in a nested data structure are listed separately in the same way.
The ALTER query for elements in a nested data structure has limitations. | {"source_file": "index.md"} | [
0.04248732328414917,
0.04383400082588196,
0.04578058421611786,
0.048349108546972275,
-0.008756727911531925,
-0.06193351000547409,
-0.05427926033735275,
0.0034533340949565172,
0.011883456259965897,
-0.006776632275432348,
0.028982749208807945,
-0.02652638405561447,
-0.00034676812356337905,
-... |
ad7ef894-66d0-44dd-898c-7f27cf35c7ea | description: 'Calculates the total length of union of all ranges (segments on numeric
axis).'
sidebar_label: 'intervalLengthSum'
sidebar_position: 155
slug: /sql-reference/aggregate-functions/reference/intervalLengthSum
title: 'intervalLengthSum'
doc_type: 'reference'
Calculates the total length of the union of all ranges (segments on a numeric axis).
Syntax
sql
intervalLengthSum(start, end)
Arguments
start
- The starting value of the interval.
Int32
,
Int64
,
UInt32
,
UInt64
,
Float32
,
Float64
,
DateTime
or
Date
.
end
- The ending value of the interval.
Int32
,
Int64
,
UInt32
,
UInt64
,
Float32
,
Float64
,
DateTime
or
Date
.
:::note
Arguments must be of the same data type. Otherwise, an exception will be thrown.
:::
Returned value
Total length of the union of all ranges (segments on a numeric axis). Depending on the type of the argument, the return value may be
UInt64
or
Float64
type.
Examples
Input table:
text
┌─id─┬─start─┬─end─┐
│ a  │   1.1 │ 2.9 │
│ a  │   2.5 │ 3.2 │
│ a  │     4 │   5 │
└────┴───────┴─────┘
In this example, the arguments of the Float32 type are used. The function returns a value of the Float64 type.
The result is the sum of the lengths of the intervals
[1.1, 3.2]
(union of
[1.1, 2.9]
and
[2.5, 3.2]
) and
[4, 5]
Query:
sql
SELECT id, intervalLengthSum(start, end), toTypeName(intervalLengthSum(start, end)) FROM fl_interval GROUP BY id ORDER BY id;
Result:
text
┌─id─┬─intervalLengthSum(start, end)─┬─toTypeName(intervalLengthSum(start, end))─┐
│ a  │                           3.1 │ Float64                                   │
└────┴───────────────────────────────┴───────────────────────────────────────────┘
Input table:
text
┌─id─┬───────────────start─┬─────────────────end─┐
│ a  │ 2020-01-01 01:12:30 │ 2020-01-01 02:10:10 │
│ a  │ 2020-01-01 02:05:30 │ 2020-01-01 02:50:31 │
│ a  │ 2020-01-01 03:11:22 │ 2020-01-01 03:23:31 │
└────┴─────────────────────┴─────────────────────┘
In this example, the arguments of the DateTime type are used. The function returns a value in seconds.
Query:
sql
SELECT id, intervalLengthSum(start, end), toTypeName(intervalLengthSum(start, end)) FROM dt_interval GROUP BY id ORDER BY id;
Result:
text
┌─id─┬─intervalLengthSum(start, end)─┬─toTypeName(intervalLengthSum(start, end))─┐
│ a  │                          6610 │ UInt64                                    │
└────┴───────────────────────────────┴───────────────────────────────────────────┘
Input table:
text
┌─id─┬──────start─┬────────end─┐
│ a  │ 2020-01-01 │ 2020-01-04 │
│ a  │ 2020-01-12 │ 2020-01-18 │
└────┴────────────┴────────────┘
In this example, the arguments of the Date type are used. The function returns a value in days.
Query:
sql
SELECT id, intervalLengthSum(start, end), toTypeName(intervalLengthSum(start, end)) FROM date_interval GROUP BY id ORDER BY id;
Result: | {"source_file": "intervalLengthSum.md"} | [
0.024631790816783905,
0.0342339463531971,
-0.0022569780703634024,
-0.002920838538557291,
-0.06040264293551445,
0.041864585131406784,
-0.04057810828089714,
0.0956123024225235,
-0.036824312061071396,
-0.026543717831373215,
-0.004672105424106121,
-0.08051738142967224,
0.0486246794462204,
-0.0... |
4f7f7c8e-f29b-4491-9b5d-fb19d3734cae | Query:
sql
SELECT id, intervalLengthSum(start, end), toTypeName(intervalLengthSum(start, end)) FROM date_interval GROUP BY id ORDER BY id;
Result:
text
┌─id─┬─intervalLengthSum(start, end)─┬─toTypeName(intervalLengthSum(start, end))─┐
│ a  │                             9 │ UInt64                                    │
└────┴───────────────────────────────┴───────────────────────────────────────────┘
0.03990021347999573,
0.020950157195329666,
0.06443136930465698,
0.044809624552726746,
-0.07402127236127853,
0.15571552515029907,
0.021854614838957787,
0.03721335530281067,
-0.0032740517053753138,
-0.05722811818122864,
0.03218596801161766,
0.00847569853067398,
0.0021627729292958975,
-0.0315... |
40d5f307-a2f5-4f96-bc4f-6841f9680089 | description: 'The
median*
functions are the aliases for the corresponding
quantile*
functions. They calculate the median of a numeric data sample.'
sidebar_position: 167
slug: /sql-reference/aggregate-functions/reference/median
title: 'median'
doc_type: 'reference'
median
The
median*
functions are the aliases for the corresponding
quantile*
functions. They calculate the median of a numeric data sample.
Functions:
median
- Alias for
quantile
.
medianDeterministic
- Alias for
quantileDeterministic
.
medianExact
- Alias for
quantileExact
.
medianExactWeighted
- Alias for
quantileExactWeighted
.
medianTiming
- Alias for
quantileTiming
.
medianTimingWeighted
- Alias for
quantileTimingWeighted
.
medianTDigest
- Alias for
quantileTDigest
.
medianTDigestWeighted
- Alias for
quantileTDigestWeighted
.
medianBFloat16
- Alias for
quantileBFloat16
.
medianDD
- Alias for
quantileDD
.
Example
Input table:
text
┌─val─┐
│   1 │
│   1 │
│   2 │
│   3 │
└─────┘
Query:
sql
SELECT medianDeterministic(val, 1) FROM t;
Result:
text
┌─medianDeterministic(val, 1)─┐
│                         1.5 │
└─────────────────────────────┘
-0.03045462630689144,
-0.021268323063850403,
0.01066404115408659,
-0.0392138734459877,
-0.06142791733145714,
-0.06738334894180298,
0.009763042442500591,
0.12493564188480377,
-0.02855791710317135,
0.021100133657455444,
0.03511197492480278,
-0.07103076577186584,
0.03950827196240425,
-0.03873... |
9511c6ef-17c9-4362-be2e-c37ab0634059 | description: 'Applies Welch''s t-test to samples from two populations.'
sidebar_label: 'welchTTest'
sidebar_position: 214
slug: /sql-reference/aggregate-functions/reference/welchttest
title: 'welchTTest'
doc_type: 'reference'
welchTTest
Applies Welch's t-test to samples from two populations.
Syntax
sql
welchTTest([confidence_level])(sample_data, sample_index)
Values of both samples are in the
sample_data
column. If
sample_index
equals 0, then the value in that row belongs to the sample from the first population. Otherwise, it belongs to the sample from the second population.
The null hypothesis is that the means of the populations are equal. Normal distribution is assumed. Populations may have unequal variance.
Arguments
sample_data
- Sample data.
Integer
,
Float
or
Decimal
.
sample_index
- Sample index.
Integer
.
Parameters
confidence_level
- Confidence level used to calculate confidence intervals.
Float
.
Returned values
Tuple
with two or four elements (if the optional
confidence_level
is specified)
calculated t-statistic.
Float64
.
calculated p-value.
Float64
.
calculated confidence-interval-low.
Float64
.
calculated confidence-interval-high.
Float64
.
Example
Input table:
text
┌─sample_data─┬─sample_index─┐
│        20.3 │            0 │
│        22.1 │            0 │
│        21.9 │            0 │
│        18.9 │            1 │
│        20.3 │            1 │
│          19 │            1 │
└─────────────┴──────────────┘
Query:
sql
SELECT welchTTest(sample_data, sample_index) FROM welch_ttest;
Result:
text
ββwelchTTest(sample_data, sample_index)ββββββ
β (2.7988719532211235,0.051807360348581945) β
βββββββββββββββββββββββββββββββββββββββββββββ
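The t-statistic above can be reproduced directly from the definition of Welch's test (sample means divided by the combined standard error). A minimal stdlib-only sketch; the helper name `welch_t` is my own:

```python
import math

def welch_t(sample1, sample2):
    """t-statistic for Welch's unequal-variance t-test."""
    n1, n2 = len(sample1), len(sample2)
    m1, m2 = sum(sample1) / n1, sum(sample2) / n2
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)  # sample variances
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

# Same data as the input table above
t = welch_t([20.3, 22.1, 21.9], [18.9, 20.3, 19.0])
```

For this data `t` agrees with the first tuple element returned by the SQL query.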
See Also
Welch's t-test
studentTTest function | {"source_file": "welchttest.md"} | [
-0.032613757997751236,
0.03227533772587776,
0.02910594642162323,
0.05886945128440857,
-0.03240475431084633,
-0.07400468736886978,
0.02187260426580906,
0.08962773531675339,
-0.07126189023256302,
0.02023087814450264,
0.07301274687051773,
-0.16880610585212708,
0.0766148492693901,
-0.106542274... |
06f4b418-fc80-437c-90e2-92fb9903a281 | description: 'Aggregate function that calculates PromQL-like rate over time series data on the specified grid.'
sidebar_position: 225
slug: /sql-reference/aggregate-functions/reference/timeSeriesRateToGrid
title: 'timeSeriesRateToGrid'
doc_type: 'reference'
Aggregate function that takes time series data as pairs of timestamps and values and calculates
PromQL-like rate
from this data on a regular time grid described by start timestamp, end timestamp and step. For each point on the grid the samples for calculating
rate
are considered within the specified time window.
Parameters:
- start timestamp - Specifies start of the grid.
- end timestamp - Specifies end of the grid.
- grid step - Specifies step of the grid in seconds.
- staleness - Specifies the maximum "staleness" in seconds of the considered samples. The staleness window is a left-open and right-closed interval.
Arguments:
- timestamp - timestamp of the sample
- value - value of the time series corresponding to the timestamp
Return value:
rate
values on the specified grid as an
Array(Nullable(Float64))
. The returned array contains one value for each time grid point. The value is NULL if there are not enough samples within the window to calculate the rate value for a particular grid point.
Example:
The following query calculates
rate
values on the grid [90, 105, 120, 135, 150, 165, 180, 195, 210]:
sql
WITH
-- NOTE: the gap between 140 and 190 is to show how values are filled for ts = 150, 165, 180 according to window parameter
[110, 120, 130, 140, 190, 200, 210, 220, 230]::Array(DateTime) AS timestamps,
[1, 1, 3, 4, 5, 5, 8, 12, 13]::Array(Float32) AS values, -- array of values corresponding to timestamps above
90 AS start_ts, -- start of timestamp grid
90 + 120 AS end_ts, -- end of timestamp grid
15 AS step_seconds, -- step of timestamp grid
45 AS window_seconds -- "staleness" window
SELECT timeSeriesRateToGrid(start_ts, end_ts, step_seconds, window_seconds)(timestamp, value)
FROM
(
-- This subquery converts arrays of timestamps and values into rows of `timestamp`, `value`
SELECT
arrayJoin(arrayZip(timestamps, values)) AS ts_and_val,
ts_and_val.1 AS timestamp,
ts_and_val.2 AS value
);
Response:
response
ββtimeSeriesRateToGrid(start_ts, β―w_seconds)(timestamps, values)ββ
1. β [NULL,NULL,0,0.06666667,0.1,0.083333336,NULL,NULL,0.083333336] β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
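The per-grid-point computation can be approximated in plain Python. This is a deliberately simplified sketch (helper name `simple_rate` is my own): it divides the counter increase inside one staleness window by the observed time span and handles counter resets by restarting from zero, but it ignores the PromQL-style extrapolation to the window boundaries, so its numbers will not match the example output exactly:

```python
def simple_rate(samples):
    """samples: (timestamp, value) pairs falling inside one staleness window."""
    samples = sorted(samples)
    if len(samples) < 2:
        return None  # not enough points -> NULL in the SQL output
    increase = 0.0
    for (_, v0), (_, v1) in zip(samples, samples[1:]):
        increase += (v1 - v0) if v1 >= v0 else v1  # counter reset: restart from 0
    return increase / (samples[-1][0] - samples[0][0])
```

For example, `simple_rate([(100, 10), (110, 20), (130, 40)])` gives `1.0` (an increase of 30 over 30 seconds).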
Also it is possible to pass multiple samples of timestamps and values as Arrays of equal size. The same query with array arguments:
sql
WITH
[110, 120, 130, 140, 190, 200, 210, 220, 230]::Array(DateTime) AS timestamps,
[1, 1, 3, 4, 5, 5, 8, 12, 13]::Array(Float32) AS values,
90 AS start_ts,
90 + 120 AS end_ts,
15 AS step_seconds,
45 AS window_seconds
SELECT timeSeriesRateToGrid(start_ts, end_ts, step_seconds, window_seconds)(timestamps, values); | {"source_file": "timeSeriesRateToGrid.md"} | [
-0.09936074167490005,
0.01302503701299429,
-0.12421011924743652,
0.08040842413902283,
-0.038658250123262405,
-0.050394535064697266,
-0.04882775992155075,
0.03516463562846184,
0.036712948232889175,
-0.01822643168270588,
-0.034316424280405045,
-0.05301322788000107,
-0.014664321206510067,
-0.... |
e82c681f-e781-405d-9d64-b447fec585e7 | :::note
This function is experimental, enable it by setting
allow_experimental_ts_to_grid_aggregate_function=true
.
::: | {"source_file": "timeSeriesRateToGrid.md"} | [
-0.046173498034477234,
-0.03719428926706314,
-0.013469633646309376,
0.09186484664678574,
0.02236771211028099,
0.004247533623129129,
0.002288517076522112,
-0.07123781740665436,
0.0007586099090985954,
0.07126602530479431,
-0.00610651820898056,
-0.012566006742417812,
-0.005058068782091141,
-0... |
e374f1e2-a188-4e54-9f7f-26cc259917b6 | description: 'Calculates the moving sum of input values.'
sidebar_position: 144
slug: /sql-reference/aggregate-functions/reference/grouparraymovingsum
title: 'groupArrayMovingSum'
doc_type: 'reference'
groupArrayMovingSum
Calculates the moving sum of input values.
sql
groupArrayMovingSum(numbers_for_summing)
groupArrayMovingSum(window_size)(numbers_for_summing)
The function can take the window size as a parameter. If left unspecified, the function takes the window size equal to the number of rows in the column.
Arguments
numbers_for_summing
β
Expression
resulting in a numeric data type value.
window_size
β Size of the calculation window.
Returned values
Array of the same size and type as the input data.
Example
The sample table:
sql
CREATE TABLE t
(
`int` UInt8,
`float` Float32,
`dec` Decimal32(2)
)
ENGINE = TinyLog
text
ββintββ¬βfloatββ¬ββdecββ
β 1 β 1.1 β 1.10 β
β 2 β 2.2 β 2.20 β
β 4 β 4.4 β 4.40 β
β 7 β 7.77 β 7.77 β
βββββββ΄ββββββββ΄βββββββ
The queries:
sql
SELECT
groupArrayMovingSum(int) AS I,
groupArrayMovingSum(float) AS F,
groupArrayMovingSum(dec) AS D
FROM t
text
ββIβββββββββββ¬βFββββββββββββββββββββββββββββββββ¬βDβββββββββββββββββββββββ
β [1,3,7,14] β [1.1,3.3000002,7.7000003,15.47] β [1.10,3.30,7.70,15.47] β
ββββββββββββββ΄ββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββ
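The semantics of both variants (unbounded and windowed) can be sketched in a few lines of Python; the helper name `moving_sum` is my own:

```python
def moving_sum(values, window=None):
    """Running sum; with a window, values older than `window` rows fall out."""
    out, s = [], 0
    for i, v in enumerate(values):
        s += v
        if window is not None and i >= window:
            s -= values[i - window]
        out.append(s)
    return out

# The `int` column from the sample table
full = moving_sum([1, 2, 4, 7])        # no window: plain running sum
windowed = moving_sum([1, 2, 4, 7], 2) # window of 2, as in the second query
```

This reproduces the `I` columns of both result tables.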
sql
SELECT
groupArrayMovingSum(2)(int) AS I,
groupArrayMovingSum(2)(float) AS F,
groupArrayMovingSum(2)(dec) AS D
FROM t
text
ββIβββββββββββ¬βFββββββββββββββββββββββββββββββββ¬βDβββββββββββββββββββββββ
β [1,3,6,11] β [1.1,3.3000002,6.6000004,12.17] β [1.10,3.30,6.60,12.17] β
ββββββββββββββ΄ββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββ | {"source_file": "grouparraymovingsum.md"} | [
0.002686504740267992,
-0.0016630274476483464,
-0.056425463408231735,
0.02876385487616062,
-0.048427071422338486,
-0.012993180193006992,
0.05840214714407921,
0.030741803348064423,
-0.02186616137623787,
-0.010487508028745651,
-0.04202849790453911,
-0.00892368983477354,
0.018168671056628227,
... |
9702b704-868f-43af-9381-cba851945f17 | description: 'Calculates the AND of a bitmap column and returns the cardinality as UInt64; with the -State suffix, returns a bitmap object.'
sidebar_position: 149
slug: /sql-reference/aggregate-functions/reference/groupbitmapand
title: 'groupBitmapAnd'
doc_type: 'reference'
Calculates the AND of a bitmap column and returns the cardinality as UInt64. If the -State suffix is added, it returns a
bitmap object
.
sql
groupBitmapAnd(expr)
Arguments
expr
β An expression that results in
AggregateFunction(groupBitmap, UInt*)
type.
Return value
Value of the
UInt64
type.
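Conceptually, the aggregation is a running intersection of per-row bitmaps. In Python-set terms (a sketch of the semantics only, not of the Roaring-bitmap implementation ClickHouse uses):

```python
# Each tag's bitmap modelled as a plain set of UInt32 values,
# mirroring the three INSERTs in the example below.
tag1 = set(range(1, 11))      # [1..10]
tag2 = set(range(6, 16))      # [6..15]
tag3 = {2, 4, 6, 8, 10, 12}

common = tag1 & tag2 & tag3   # groupBitmapAndState analogue
cardinality = len(common)     # groupBitmapAnd analogue
```

Here `common` is `{6, 8, 10}` and `cardinality` is `3`, matching the example output.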
Example
```sql
DROP TABLE IF EXISTS bitmap_column_expr_test2;
CREATE TABLE bitmap_column_expr_test2
(
tag_id String,
z AggregateFunction(groupBitmap, UInt32)
)
ENGINE = MergeTree
ORDER BY tag_id;
INSERT INTO bitmap_column_expr_test2 VALUES ('tag1', bitmapBuild(cast([1,2,3,4,5,6,7,8,9,10] AS Array(UInt32))));
INSERT INTO bitmap_column_expr_test2 VALUES ('tag2', bitmapBuild(cast([6,7,8,9,10,11,12,13,14,15] AS Array(UInt32))));
INSERT INTO bitmap_column_expr_test2 VALUES ('tag3', bitmapBuild(cast([2,4,6,8,10,12] AS Array(UInt32))));
SELECT groupBitmapAnd(z) FROM bitmap_column_expr_test2 WHERE like(tag_id, 'tag%');
ββgroupBitmapAnd(z)ββ
β 3 β
βββββββββββββββββββββ
SELECT arraySort(bitmapToArray(groupBitmapAndState(z))) FROM bitmap_column_expr_test2 WHERE like(tag_id, 'tag%');
ββarraySort(bitmapToArray(groupBitmapAndState(z)))ββ
β [6,8,10] β
ββββββββββββββββββββββββββββββββββββββββββββββββββββ
``` | {"source_file": "groupbitmapand.md"} | [
-0.0031174607574939728,
0.06190905347466469,
-0.04606771841645241,
0.03935179114341736,
-0.059484388679265976,
-0.039254263043403625,
0.04902098327875137,
0.03211662918329239,
-0.1071983352303505,
0.009401102550327778,
0.044320110231637955,
-0.1261141449213028,
0.06021483242511749,
-0.0685... |
e672893c-31eb-4746-8a23-01103e3d26cc | description: 'Returns an array of the approximately most frequent values in the specified
column. The resulting array is sorted in descending order of approximate frequency
of values (not by the values themselves). Additionally, the weight of the value
is taken into account.'
sidebar_position: 203
slug: /sql-reference/aggregate-functions/reference/topkweighted
title: 'topKWeighted'
doc_type: 'reference'
topKWeighted
Returns an array of the approximately most frequent values in the specified column. The resulting array is sorted in descending order of approximate frequency of values (not by the values themselves). Additionally, the weight of the value is taken into account.
Syntax
sql
topKWeighted(N)(column, weight)
topKWeighted(N, load_factor)(column, weight)
topKWeighted(N, load_factor, 'counts')(column, weight)
Parameters
N
β The number of elements to return. Optional. Default value: 10.
load_factor
β Defines how many cells are reserved for values. If uniq(column) > N * load_factor, the result of the topK function will be approximate. Optional. Default value: 3.
counts
β Defines whether the result should contain the approximate count and error value.
Arguments
column
β The value.
weight
β The weight. Every value is counted
weight
times in the frequency calculation.
UInt64
.
Returned value
Returns an array of the values with maximum approximate sum of weights.
Example
Query:
sql
SELECT topKWeighted(2)(k, w) FROM
VALUES('k Char, w UInt64', ('y', 1), ('y', 1), ('x', 5), ('y', 1), ('z', 10))
Result:
text
ββtopKWeighted(2)(k, w)βββ
β ['z','x'] β
ββββββββββββββββββββββββββ
Query:
sql
SELECT topKWeighted(2, 10, 'counts')(k, w)
FROM VALUES('k Char, w UInt64', ('y', 1), ('y', 1), ('x', 5), ('y', 1), ('z', 10))
Result:
text
ββtopKWeighted(2, 10, 'counts')(k, w)ββ
β [('z',10,0),('x',5,0)] β
βββββββββββββββββββββββββββββββββββββββ
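Setting aside the approximation machinery, for small inputs the result is simply the k values with the largest weight sums. A stdlib sketch (the helper name `top_k_weighted` is my own):

```python
from collections import Counter

def top_k_weighted(pairs, k):
    """pairs: (value, weight); returns the k values with the largest weight sums."""
    totals = Counter()
    for value, weight in pairs:
        totals[value] += weight
    return [v for v, _ in totals.most_common(k)]

# Same rows as the VALUES clause above: y has total weight 3, x has 5, z has 10
top2 = top_k_weighted([('y', 1), ('y', 1), ('x', 5), ('y', 1), ('z', 10)], 2)
```

This yields `['z', 'x']`, matching the first example's output.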
See Also
topK
approx_top_k
approx_top_sum | {"source_file": "topkweighted.md"} | [
0.030583515763282776,
-0.00028166931588202715,
-0.03585759177803993,
0.07619401812553406,
-0.04922748729586601,
-0.016786394640803337,
0.04914052039384842,
0.09575086086988449,
-0.013191403821110725,
0.004181621130555868,
-0.022225789725780487,
0.007363727316260338,
0.07501229643821716,
-0... |
d954a745-dcf0-4fdd-81b1-7ec9631eb198 | description: 'Aggregate function that calculates PromQL-like derivative over time series data on the specified grid.'
sidebar_position: 227
slug: /sql-reference/aggregate-functions/reference/timeSeriesDerivToGrid
title: 'timeSeriesDerivToGrid'
doc_type: 'reference'
Aggregate function that takes time series data as pairs of timestamps and values and calculates
PromQL-like derivative
from this data on a regular time grid described by start timestamp, end timestamp and step. For each point on the grid the samples for calculating
deriv
are considered within the specified time window.
Parameters:
- start timestamp - Specifies start of the grid.
- end timestamp - Specifies end of the grid.
- grid step - Specifies step of the grid in seconds.
- staleness - Specifies the maximum "staleness" in seconds of the considered samples. The staleness window is a left-open and right-closed interval.
Arguments:
- timestamp - timestamp of the sample
- value - value of the time series corresponding to the timestamp
Return value:
deriv
values on the specified grid as an
Array(Nullable(Float64))
. The returned array contains one value for each time grid point. The value is NULL if there are not enough samples within the window to calculate the derivative value for a particular grid point.
Example:
The following query calculates
deriv
values on the grid [90, 105, 120, 135, 150, 165, 180, 195, 210]:
sql
WITH
-- NOTE: the gap between 140 and 190 is to show how values are filled for ts = 150, 165, 180 according to window parameter
[110, 120, 130, 140, 190, 200, 210, 220, 230]::Array(DateTime) AS timestamps,
[1, 1, 3, 4, 5, 5, 8, 12, 13]::Array(Float32) AS values, -- array of values corresponding to timestamps above
90 AS start_ts, -- start of timestamp grid
90 + 120 AS end_ts, -- end of timestamp grid
15 AS step_seconds, -- step of timestamp grid
45 AS window_seconds -- "staleness" window
SELECT timeSeriesDerivToGrid(start_ts, end_ts, step_seconds, window_seconds)(timestamp, value)
FROM
(
-- This subquery converts arrays of timestamps and values into rows of `timestamp`, `value`
SELECT
arrayJoin(arrayZip(timestamps, values)) AS ts_and_val,
ts_and_val.1 AS timestamp,
ts_and_val.2 AS value
);
Response:
response
ββtimeSeriesDerivToGrid(start_ts, end_ts, step_seconds, window_seconds)(timestamp, value)ββ
1. β [NULL,NULL,0,0.1,0.11,0.15,NULL,NULL,0.15] β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
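PromQL's deriv is the least-squares regression slope of the samples in the window, and that reproduces the example's values. A minimal sketch (the helper name `ls_slope` is my own):

```python
def ls_slope(points):
    """Least-squares slope of (timestamp, value) samples in one window."""
    n = len(points)
    if n < 2:
        return None  # -> NULL in the SQL output
    mx = sum(t for t, _ in points) / n
    my = sum(v for _, v in points) / n
    num = sum((t - mx) * (v - my) for t, v in points)
    den = sum((t - mx) ** 2 for t, _ in points)
    return num / den

# Grid point 150 with window 45 covers samples in (105, 150]:
slope = ls_slope([(110, 1), (120, 1), (130, 3), (140, 4)])
```

Here `slope` is `0.11`, matching the fifth element of the response array above.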
Also it is possible to pass multiple samples of timestamps and values as Arrays of equal size. The same query with array arguments: | {"source_file": "timeSeriesDerivToGrid.md"} | [
-0.11825423687696457,
0.008377665653824806,
-0.07488870620727539,
0.08784772455692291,
-0.02060866542160511,
-0.043255411088466644,
-0.012761160731315613,
0.06854338943958282,
0.012661846354603767,
-0.008204840123653412,
-0.046324536204338074,
-0.07065770775079727,
-0.005913851782679558,
-... |
f31a8a76-045a-4149-8f97-776c328138f5 | Also it is possible to pass multiple samples of timestamps and values as Arrays of equal size. The same query with array arguments:
sql
WITH
[110, 120, 130, 140, 190, 200, 210, 220, 230]::Array(DateTime) AS timestamps,
[1, 1, 3, 4, 5, 5, 8, 12, 13]::Array(Float32) AS values,
90 AS start_ts,
90 + 120 AS end_ts,
15 AS step_seconds,
45 AS window_seconds
SELECT timeSeriesDerivToGrid(start_ts, end_ts, step_seconds, window_seconds)(timestamps, values);
:::note
This function is experimental, enable it by setting
allow_experimental_ts_to_grid_aggregate_function=true
.
::: | {"source_file": "timeSeriesDerivToGrid.md"} | [
-0.0409311018884182,
0.03310738876461983,
-0.029888227581977844,
0.0275469571352005,
-0.043774593621492386,
0.021450240164995193,
0.01341960672289133,
-0.0012067410862073302,
-0.005682162009179592,
-0.021820148453116417,
-0.08521782606840134,
-0.09914921224117279,
-0.02843405492603779,
-0.... |
b8f1acba-e212-47aa-b10a-ce04f88bdf9a | description: 'Calculates a list of distinct paths stored in a JSON column.'
sidebar_position: 216
slug: /sql-reference/aggregate-functions/reference/distinctjsonpaths
title: 'distinctJSONPaths'
doc_type: 'reference'
distinctJSONPaths
Calculates a list of distinct paths stored in a
JSON
column.
Syntax
sql
distinctJSONPaths(json)
Arguments
json
β
JSON
column.
Returned Value
The sorted list of paths
Array(String)
.
Example
Query:
sql
DROP TABLE IF EXISTS test_json;
CREATE TABLE test_json(json JSON) ENGINE = Memory;
INSERT INTO test_json VALUES ('{"a" : 42, "b" : "Hello"}'), ('{"b" : [1, 2, 3], "c" : {"d" : {"e" : "2020-01-01"}}}'), ('{"a" : 43, "c" : {"d" : {"f" : [{"g" : 42}]}}}')
sql
SELECT distinctJSONPaths(json) FROM test_json;
Result:
reference
ββdistinctJSONPaths(json)ββββ
β ['a','b','c.d.e','c.d.f'] β
βββββββββββββββββββββββββββββ
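For plain dictionaries, the path extraction can be sketched as a recursion that descends into nested objects and terminates a path at any scalar or array value (a simplification of what ClickHouse's JSON type actually tracks; the helper name is my own):

```python
def distinct_json_paths(docs):
    """Collect dotted paths; any non-dict value (scalar or array) ends a path."""
    paths = set()
    def walk(obj, prefix):
        for key, val in obj.items():
            path = f"{prefix}.{key}" if prefix else key
            if isinstance(val, dict):
                walk(val, path)
            else:
                paths.add(path)
    for doc in docs:
        walk(doc, "")
    return sorted(paths)

# The three rows inserted in the example above
docs = [
    {"a": 42, "b": "Hello"},
    {"b": [1, 2, 3], "c": {"d": {"e": "2020-01-01"}}},
    {"a": 43, "c": {"d": {"f": [{"g": 42}]}}},
]
result = distinct_json_paths(docs)
```

For these rows `result` is `['a', 'b', 'c.d.e', 'c.d.f']`, matching the query output.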
distinctJSONPathsAndTypes
Calculates the list of distinct paths and their types stored in
JSON
column.
Syntax
sql
distinctJSONPathsAndTypes(json)
Arguments
json
β
JSON
column.
Returned Value
The sorted map of paths and types
Map(String, Array(String))
.
Example
Query:
sql
DROP TABLE IF EXISTS test_json;
CREATE TABLE test_json(json JSON) ENGINE = Memory;
INSERT INTO test_json VALUES ('{"a" : 42, "b" : "Hello"}'), ('{"b" : [1, 2, 3], "c" : {"d" : {"e" : "2020-01-01"}}}'), ('{"a" : 43, "c" : {"d" : {"f" : [{"g" : 42}]}}}')
sql
SELECT distinctJSONPathsAndTypes(json) FROM test_json;
Result:
reference
ββdistinctJSONPathsAndTypes(json)ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β {'a':['Int64'],'b':['Array(Nullable(Int64))','String'],'c.d.e':['Date'],'c.d.f':['Array(JSON(max_dynamic_types=16, max_dynamic_paths=256))']} β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Note
If JSON declaration contains paths with specified types, these paths will be always included in the result of
distinctJSONPaths/distinctJSONPathsAndTypes
functions even if input data didn't have values for these paths.
sql
DROP TABLE IF EXISTS test_json;
CREATE TABLE test_json(json JSON(a UInt32)) ENGINE = Memory;
INSERT INTO test_json VALUES ('{"b" : "Hello"}'), ('{"b" : "World", "c" : [1, 2, 3]}');
sql
SELECT json FROM test_json;
text
ββjsonβββββββββββββββββββββββββββββββββββ
β {"a":0,"b":"Hello"} β
β {"a":0,"b":"World","c":["1","2","3"]} β
βββββββββββββββββββββββββββββββββββββββββ
sql
SELECT distinctJSONPaths(json) FROM test_json;
text
ββdistinctJSONPaths(json)ββ
β ['a','b','c'] β
βββββββββββββββββββββββββββ
sql
SELECT distinctJSONPathsAndTypes(json) FROM test_json; | {"source_file": "distinctjsonpaths.md"} | [
0.004577265586704016,
0.004742828197777271,
0.023867357522249222,
0.06251713633537292,
-0.07723556458950043,
-0.015700317919254303,
-0.008939092978835106,
0.053781162947416306,
0.0376930795609951,
0.014523492194712162,
0.04875365272164345,
0.05384176969528198,
0.013415326364338398,
-0.0139... |
492243f5-c2be-4086-a896-609ce4b1ee87 | text
ββdistinctJSONPaths(json)ββ
β ['a','b','c'] β
βββββββββββββββββββββββββββ
sql
SELECT distinctJSONPathsAndTypes(json) FROM test_json;
text
ββdistinctJSONPathsAndTypes(json)βββββββββββββββββββββββββββββββββ
β {'a':['UInt32'],'b':['String'],'c':['Array(Nullable(Int64))']} β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | {"source_file": "distinctjsonpaths.md"} | [
0.03145645186305046,
0.02442951500415802,
-0.020691093057394028,
0.027564851567149162,
-0.053363773971796036,
-0.014574490487575531,
0.0540362149477005,
-0.023840444162487984,
-0.01898396573960781,
-0.05098395422101021,
0.07475470006465912,
0.033438876271247864,
0.0024503974709659815,
0.05... |
3e9e101a-30be-4629-831a-0886e3f8620a | description: 'Applies Kolmogorov-Smirnov''s test to samples from two populations.'
sidebar_label: 'kolmogorovSmirnovTest'
sidebar_position: 156
slug: /sql-reference/aggregate-functions/reference/kolmogorovsmirnovtest
title: 'kolmogorovSmirnovTest'
doc_type: 'reference'
kolmogorovSmirnovTest
Applies Kolmogorov-Smirnov's test to samples from two populations.
Syntax
sql
kolmogorovSmirnovTest([alternative, computation_method])(sample_data, sample_index)
Values of both samples are in the
sample_data
column. If
sample_index
equals 0, the value in that row belongs to the sample from the first population. Otherwise it belongs to the sample from the second population.
Samples must belong to continuous, one-dimensional probability distributions.
Arguments
sample_data
β Sample data.
Integer
,
Float
or
Decimal
.
sample_index
β Sample index.
Integer
.
Parameters
alternative
β alternative hypothesis. (Optional, default:
'two-sided'
.)
String
.
Let F(x) and G(x) be the CDFs of the first and second distributions respectively.
'two-sided'
The null hypothesis is that samples come from the same distribution, i.e.
F(x) = G(x)
for all x.
And the alternative is that the distributions are not identical.
'greater'
The null hypothesis is that values in the first sample are
stochastically smaller
than those in the second one,
i.e. the CDF of the first distribution lies above, and hence to the left of, that of the second one.
Which in fact means that
F(x) >= G(x)
for all x. And the alternative in this case is that
F(x) < G(x)
for at least one x.
'less'
.
The null hypothesis is that values in the first sample are
stochastically greater
than those in the second one,
i.e. the CDF of the first distribution lies below, and hence to the right of, that of the second one.
Which in fact means that
F(x) <= G(x)
for all x. And the alternative in this case is that
F(x) > G(x)
for at least one x.
computation_method
β the method used to compute p-value. (Optional, default:
'auto'
.)
String
.
'exact'
- calculation is performed using the precise probability distribution of the test statistic. Compute-intensive, and wasteful except for small samples.
'asymp'
(
'asymptotic'
) - calculation is performed using an approximation. For large sample sizes, the exact and asymptotic p-values are very similar.
'auto'
- the
'exact'
method is used when the maximum of the sample sizes is less than 10,000.
Returned values
Tuple
with two elements:
calculated statistic.
Float64
.
calculated p-value.
Float64
.
Example
Query:
sql
SELECT kolmogorovSmirnovTest('less', 'exact')(value, num)
FROM
(
SELECT
randNormal(0, 10) AS value,
0 AS num
FROM numbers(10000)
UNION ALL
SELECT
randNormal(0, 10) AS value,
1 AS num
FROM numbers(10000)
)
Result: | {"source_file": "kolmogorovsmirnovtest.md"} | [
-0.024777386337518692,
0.009277474135160446,
-0.040468327701091766,
0.040790144354104996,
0.019582996144890785,
-0.046518828719854355,
0.04625158756971359,
0.03967238962650299,
-0.031736165285110474,
-0.014669124037027359,
0.06377337872982025,
-0.1504940241575241,
0.06517909467220306,
-0.1... |
b2f9f5df-0607-497c-b05e-d252848bf201 | Result:
text
ββkolmogorovSmirnovTest('less', 'exact')(value, num)ββ
β (0.009899999999999996,0.37528595205132287) β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Note:
The p-value is greater than 0.05 (at a 95% confidence level), so the null hypothesis is not rejected.
Query:
sql
SELECT kolmogorovSmirnovTest('two-sided', 'exact')(value, num)
FROM
(
SELECT
randStudentT(10) AS value,
0 AS num
FROM numbers(100)
UNION ALL
SELECT
randNormal(0, 10) AS value,
1 AS num
FROM numbers(100)
)
Result:
text
ββkolmogorovSmirnovTest('two-sided', 'exact')(value, num)ββ
β (0.4100000000000002,6.61735760482795e-8) β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Note:
The p-value is less than 0.05 (at a 95% confidence level), so the null hypothesis is rejected.
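The first element of the returned tuple, the KS statistic itself, is just the largest gap between the two empirical CDFs. A stdlib sketch (the helper name `ks_statistic` is my own; p-value computation is omitted):

```python
import bisect

def ks_statistic(sample1, sample2):
    """Two-sample KS statistic: the largest gap between the two empirical CDFs."""
    s1, s2 = sorted(sample1), sorted(sample2)
    d = 0.0
    for x in s1 + s2:
        f1 = bisect.bisect_right(s1, x) / len(s1)
        f2 = bisect.bisect_right(s2, x) / len(s2)
        d = max(d, abs(f1 - f2))
    return d
```

For example, `ks_statistic([1, 2, 3], [2, 3, 4])` is `1/3`: at every sample point one ECDF leads the other by exactly one third.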
See Also
Kolmogorov-Smirnov test
0.03882777690887451,
0.06978009641170502,
0.0031843555625528097,
0.008392459712922573,
0.02909967117011547,
-0.06135483831167221,
0.02976961061358452,
0.09047313779592514,
0.05800187215209007,
0.04620915278792381,
0.05062933266162872,
-0.16922280192375183,
0.042704835534095764,
-0.01791321... |
d2d7adc1-a25e-4eb5-86d9-75fc443556ea | description: 'Computes quantile of a numeric data sequence using linear interpolation,
taking into account the weight of each element.'
sidebar_position: 176
slug: /sql-reference/aggregate-functions/reference/quantileExactWeightedInterpolated
title: 'quantileExactWeightedInterpolated'
doc_type: 'reference'
quantileExactWeightedInterpolated
Computes
quantile
of a numeric data sequence using linear interpolation, taking into account the weight of each element.
To get the interpolated value, all the passed values are combined into an array, which are then sorted by their corresponding weights. Quantile interpolation is then performed using the
weighted percentile method
by building a cumulative distribution based on weights and then a linear interpolation is performed using the weights and the values to compute the quantiles.
When using multiple
quantile*
functions with different levels in a query, the internal states are not combined (that is, the query works less efficiently than it could). In this case, use the
quantiles
function.
We strongly recommend using
quantileExactWeightedInterpolated
instead of
quantileInterpolatedWeighted
because
quantileExactWeightedInterpolated
is more accurate than
quantileInterpolatedWeighted
. Here is an example:
sql
SELECT
quantileExactWeightedInterpolated(0.99)(number, 1),
quantile(0.99)(number),
quantileInterpolatedWeighted(0.99)(number, 1)
FROM numbers(9)
ββquantileExactWeightedInterpolated(0.99)(number, 1)ββ¬βquantile(0.99)(number)ββ¬βquantileInterpolatedWeighted(0.99)(number, 1)ββ
β 7.92 β 7.92 β 8 β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββββ
Syntax
sql
quantileExactWeightedInterpolated(level)(expr, weight)
Alias:
medianExactWeightedInterpolated
.
Arguments
level
β Level of quantile. Optional parameter. Constant floating-point number from 0 to 1. We recommend using a
level
value in the range of
[0.01, 0.99]
. Default value: 0.5. At
level=0.5
the function calculates
median
.
expr
β Expression over the column values resulting in numeric
data types
,
Date
or
DateTime
.
weight
β Column with weights of sequence members. Weight is a number of value occurrences with
Unsigned integer types
.
Returned value
Quantile of the specified level.
Type:
Float64
for numeric data type input.
Date
if input values have the
Date
type.
DateTime
if input values have the
DateTime
type.
Example
Input table:
text
ββnββ¬βvalββ
β 0 β 3 β
β 1 β 2 β
β 2 β 1 β
β 5 β 4 β
βββββ΄ββββββ
Result:
text
ββquantileExactWeightedInterpolated(n, val)ββ
β 1.5 β
βββββββββββββββββββββββββββββββββββββββββββββ
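One plausible reading of the weighted interpolation (an assumption on my part, not the documented algorithm) is to place each sorted value at cumulative-weight position (cum_w - 1) / (W - 1) and interpolate linearly between neighbours; this convention reproduces both examples in this section. A sketch with a hypothetical helper name:

```python
def weighted_quantile_interp(pairs, level=0.5):
    """pairs: (value, weight) with positive weights and total weight > 1 (assumed)."""
    pairs = sorted(pairs)
    total = sum(w for _, w in pairs)
    cum, pts = 0, []
    for value, w in pairs:
        cum += w
        pts.append(((cum - 1) / (total - 1), value))  # assumed position convention
    if level <= pts[0][0]:
        return pts[0][1]
    for (p0, v0), (p1, v1) in zip(pts, pts[1:]):
        if level <= p1:
            return v0 + (v1 - v0) * (level - p0) / (p1 - p0)
    return pts[-1][1]
```

With unit weights over numbers(9) at level 0.99 it yields 7.92, and for the n/val table above at the default level 0.5 it yields 1.5.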
See Also
median
quantiles | {"source_file": "quantileexactweightedinterpolated.md"} | [
-0.0950479581952095,
0.012859492562711239,
0.03463950380682945,
0.0074967979453504086,
-0.10303040593862534,
-0.05821872875094414,
0.02145226299762726,
0.04643167927861214,
-0.0093600545078516,
-0.02059185691177845,
-0.0445609949529171,
-0.06718650460243225,
0.040777552872896194,
-0.052252... |
b9462ac9-e48c-4e62-aa47-285ddfaf5ce0 | description: 'Applies the Largest-Triangle-Three-Buckets algorithm to the input data.'
sidebar_label: 'largestTriangleThreeBuckets'
sidebar_position: 159
slug: /sql-reference/aggregate-functions/reference/largestTriangleThreeBuckets
title: 'largestTriangleThreeBuckets'
doc_type: 'reference'
largestTriangleThreeBuckets
Applies the
Largest-Triangle-Three-Buckets
algorithm to the input data.
The algorithm is used for downsampling time series data for visualization. It is designed to operate on series sorted by x coordinate.
It works by dividing the sorted series into buckets and then finding the largest triangle in each bucket. The number of buckets is equal to the number of points in the resulting series.
The function will sort the data by
x
and then apply the downsampling algorithm to the sorted data.
Syntax
sql
largestTriangleThreeBuckets(n)(x, y)
Alias:
lttb
.
Arguments
x
β x coordinate.
Integer
,
Float
,
Decimal
,
Date
,
Date32
,
DateTime
,
DateTime64
.
y
β y coordinate.
Integer
,
Float
,
Decimal
,
Date
,
Date32
,
DateTime
,
DateTime64
.
NaNs are ignored in the provided series, meaning that any NaN values will be excluded from the analysis. This ensures that the function operates only on valid numerical data.
Parameters
n
β number of points in the resulting series.
UInt64
.
Returned values
Array
of
Tuple
with two elements: the x and y coordinates of the selected points.
Example
Input table:
text
ββββββxββββββββ¬βββββββyβββββββ
β 1.000000000 β 10.000000000 β
β 2.000000000 β 20.000000000 β
β 3.000000000 β 15.000000000 β
β 8.000000000 β 60.000000000 β
β 9.000000000 β 55.000000000 β
β 10.00000000 β 70.000000000 β
β 4.000000000 β 30.000000000 β
β 5.000000000 β 40.000000000 β
β 6.000000000 β 35.000000000 β
β 7.000000000 β 50.000000000 β
βββββββββββββββ΄βββββββββββββββ
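For reference, the bucket-and-triangle selection can be sketched in Python. This is a simplified LTTB (input assumed pre-sorted by x; the helper name is my own): the first and last points are always kept, and in each of the remaining buckets the point forming the largest triangle with the previously selected point and the next bucket's average is chosen:

```python
def lttb(points, n_out):
    """Largest-Triangle-Three-Buckets downsampling; points sorted by x."""
    if n_out >= len(points) or n_out < 3:
        return list(points)
    sampled = [points[0]]
    every = (len(points) - 2) / (n_out - 2)
    for i in range(n_out - 2):
        start = int(i * every) + 1
        end = int((i + 1) * every) + 1
        nxt = points[end:min(int((i + 2) * every) + 1, len(points))] or [points[-1]]
        avg_x = sum(p[0] for p in nxt) / len(nxt)
        avg_y = sum(p[1] for p in nxt) / len(nxt)
        ax, ay = sampled[-1]
        # pick the candidate maximizing the (doubled) triangle area
        best = max(points[start:end],
                   key=lambda c: abs((ax - avg_x) * (c[1] - ay) - (ax - c[0]) * (avg_y - ay)))
        sampled.append(best)
    sampled.append(points[-1])
    return sampled

data = [(1, 10), (2, 20), (3, 15), (4, 30), (5, 40),
        (6, 35), (7, 50), (8, 60), (9, 55), (10, 70)]
picked = lttb(data, 4)
```

For this input `picked` is `[(1, 10), (3, 15), (9, 55), (10, 70)]`, matching the query result below.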
Query:
sql
SELECT largestTriangleThreeBuckets(4)(x, y) FROM largestTriangleThreeBuckets_test;
Result:
text
βββββββββlargestTriangleThreeBuckets(4)(x, y)ββββββββββββ
β [(1,10),(3,15),(9,55),(10,70)] β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | {"source_file": "largestTriangleThreeBuckets.md"} | [
-0.036604247987270355,
-0.006202886812388897,
0.019634302705526352,
-0.08053184300661087,
-0.09713971614837646,
-0.03915547579526901,
-0.02198069542646408,
0.06554249674081802,
-0.05456152558326721,
-0.017553232610225677,
-0.04982081800699234,
0.01650259643793106,
0.006694546435028315,
-0.... |
2645b279-cac0-4dd0-9b69-49fb1dd4bcb6 | description: 'Returns an array of the approximately most frequent values and their
counts in the specified column.'
sidebar_position: 108
slug: /sql-reference/aggregate-functions/reference/approxtopsum
title: 'approx_top_sum'
doc_type: 'reference'
approx_top_sum
Returns an array of the approximately most frequent values and their counts in the specified column. The resulting array is sorted in descending order of approximate frequency of values (not by the values themselves). Additionally, the weight of the value is taken into account.
sql
approx_top_sum(N)(column, weight)
approx_top_sum(N, reserved)(column, weight)
This function does not provide a guaranteed result. In certain situations, errors might occur and it might return frequent values that aren't the most frequent values.
We recommend using the
N < 10
value; performance is reduced with large
N
values. Maximum value of
N = 65536
.
Parameters
N
β The number of elements to return. Optional. Default value: 10.
reserved
β Defines how many cells are reserved for values. If uniq(column) > reserved, the result of the topK function will be approximate. Optional. Default value: N * 3.
Arguments
column
β The value to calculate frequency.
weight
β The weight. Every value is counted
weight
times in the frequency calculation.
UInt64
.
Example
Query:
sql
SELECT approx_top_sum(2)(k, w)
FROM VALUES('k Char, w UInt64', ('y', 1), ('y', 1), ('x', 5), ('y', 1), ('z', 10))
Result:
text
ββapprox_top_sum(2)(k, w)ββ
β [('z',10,0),('x',5,0)] β
βββββββββββββββββββββββββββ
See Also
topK
topKWeighted
approx_top_k | {"source_file": "approxtopsum.md"} | [
0.008834618143737316,
0.006350921932607889,
-0.06047271937131882,
0.03425231948494911,
-0.06593213975429535,
-0.014785096980631351,
0.03909321501851082,
0.0889049619436264,
0.008391804061830044,
-0.008638535626232624,
-0.017388101667165756,
0.00427367864176631,
0.07944098114967346,
-0.1080... |
7a9bd1af-7271-43c6-a8d3-44cf2b6b1bab | description: 'Calculates the value of Σ((x - x̄)(y - ȳ)) / (n - 1)'
sidebar_position: 124
slug: /sql-reference/aggregate-functions/reference/covarsamp
title: 'covarSamp'
doc_type: 'reference'
covarSamp
Calculates the value of
Σ((x - x̄)(y - ȳ)) / (n - 1)
.
:::note
This function uses a numerically unstable algorithm. If you need
numerical stability
in calculations, use the
covarSampStable
function. It works slower but provides a lower computational error.
:::
Syntax
sql
covarSamp(x, y)
Arguments
x
β first variable.
(U)Int*
,
Float*
,
Decimal
.
y
β second variable.
(U)Int*
,
Float*
,
Decimal
.
Returned Value
The sample covariance between
x
and
y
. For
n <= 1
,
nan
is returned.
Float64
.
Example
Query:
sql
DROP TABLE IF EXISTS series;
CREATE TABLE series(i UInt32, x_value Float64, y_value Float64) ENGINE = Memory;
INSERT INTO series(i, x_value, y_value) VALUES (1, 5.6,-4.4),(2, -9.6,3),(3, -1.3,-4),(4, 5.3,9.7),(5, 4.4,0.037),(6, -8.6,-7.8),(7, 5.1,9.3),(8, 7.9,-3.6),(9, -8.2,0.62),(10, -3,7.3);
sql
SELECT covarSamp(x_value, y_value)
FROM
(
SELECT
x_value,
y_value
FROM series
);
Result:
reference
ββcovarSamp(x_value, y_value)ββ
β 7.206275555555556 β
βββββββββββββββββββββββββββββββ
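Applying the formula Σ((x - x̄)(y - ȳ)) / (n - 1) directly reproduces this result. A stdlib sketch (the helper name `covar_samp` is my own):

```python
import math

def covar_samp(xs, ys):
    """Sample covariance: sum((x - mean_x)(y - mean_y)) / (n - 1); nan for n <= 1."""
    n = len(xs)
    if n <= 1:
        return float("nan")
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

# The ten (x_value, y_value) pairs inserted into `series` above
xs = [5.6, -9.6, -1.3, 5.3, 4.4, -8.6, 5.1, 7.9, -8.2, -3.0]
ys = [-4.4, 3.0, -4.0, 9.7, 0.037, -7.8, 9.3, -3.6, 0.62, 7.3]
cov = covar_samp(xs, ys)
```

`cov` agrees with the SQL result, and a single-row input gives `nan`, matching the second query below.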
Query:
```sql
SELECT covarSamp(x_value, y_value)
FROM
(
SELECT
x_value,
y_value
FROM series LIMIT 1
);
```
Result:
reference
ββcovarSamp(x_value, y_value)ββ
β nan β
βββββββββββββββββββββββββββββββ | {"source_file": "covarsamp.md"} | [
-0.018267236649990082,
-0.07054412364959717,
-0.017068926244974136,
-0.03957716003060341,
-0.08428805321455002,
-0.04656225070357323,
0.07553210109472275,
0.0689333900809288,
-0.07108767330646515,
0.014172999188303947,
0.027475109323859215,
-0.008791235275566578,
0.022742753848433495,
-0.0... |
6c51f043-d166-4286-a6f3-ec82de2192d9 | description: 'Calculates the OR of a bitmap column and returns the cardinality as UInt64; with the -State suffix, returns a bitmap object. This is equivalent to groupBitmapMerge.'
sidebar_position: 150
slug: /sql-reference/aggregate-functions/reference/groupbitmapor
title: 'groupBitmapOr'
doc_type: 'reference'
groupBitmapOr
Calculates the OR of a bitmap column and returns the cardinality as UInt64. If the -State suffix is added, it returns a
bitmap object
. This is equivalent to
groupBitmapMerge
.
sql
groupBitmapOr(expr)
Arguments
expr
β An expression that results in
AggregateFunction(groupBitmap, UInt*)
type.
Returned value
Value of the
UInt64
type.
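Conceptually, the operation is a union of per-row bitmaps followed by a cardinality count. A Python model using sets (an illustration, not ClickHouse's roaring-bitmap implementation) over the same values as the example below:

```python
def group_bitmap_or(bitmaps):
    # OR-combine the row bitmaps (modelled as Python sets) and return the
    # cardinality, mirroring the UInt64 result; the union itself plays the
    # role of the -State bitmap object.
    acc = set()
    for bm in bitmaps:
        acc |= bm
    return len(acc), sorted(acc)

rows = [
    set(range(1, 11)),       # tag1: [1..10]
    set(range(6, 16)),       # tag2: [6..15]
    {2, 4, 6, 8, 10, 12},    # tag3
]
cardinality, merged = group_bitmap_or(rows)
print(cardinality)  # 15, matching the SQL example below
```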
Example
```sql
DROP TABLE IF EXISTS bitmap_column_expr_test2;
CREATE TABLE bitmap_column_expr_test2
(
tag_id String,
z AggregateFunction(groupBitmap, UInt32)
)
ENGINE = MergeTree
ORDER BY tag_id;
INSERT INTO bitmap_column_expr_test2 VALUES ('tag1', bitmapBuild(cast([1,2,3,4,5,6,7,8,9,10] AS Array(UInt32))));
INSERT INTO bitmap_column_expr_test2 VALUES ('tag2', bitmapBuild(cast([6,7,8,9,10,11,12,13,14,15] AS Array(UInt32))));
INSERT INTO bitmap_column_expr_test2 VALUES ('tag3', bitmapBuild(cast([2,4,6,8,10,12] AS Array(UInt32))));
SELECT groupBitmapOr(z) FROM bitmap_column_expr_test2 WHERE like(tag_id, 'tag%');
ββgroupBitmapOr(z)ββ
β 15 β
ββββββββββββββββββββ
SELECT arraySort(bitmapToArray(groupBitmapOrState(z))) FROM bitmap_column_expr_test2 WHERE like(tag_id, 'tag%');
ββarraySort(bitmapToArray(groupBitmapOrState(z)))ββ
β [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15] β
βββββββββββββββββββββββββββββββββββββββββββββββββββ
``` | {"source_file": "groupbitmapor.md"} | [
-0.005227779969573021,
0.05620671808719635,
-0.059321168810129166,
0.06103077158331871,
-0.059922993183135986,
-0.031332828104496,
0.04134466126561165,
0.04226319119334221,
-0.08867383748292923,
0.007152724079787731,
0.0349983274936676,
-0.11218523979187012,
0.07635483890771866,
-0.0568072... |
561b6cb4-4d9b-413a-8546-e61452203f96 | description: 'Calculate the sample variance of a data set.'
sidebar_position: 212
slug: /sql-reference/aggregate-functions/reference/varSamp
title: 'varSamp'
doc_type: 'reference'
varSamp {#varsamp}
Calculate the sample variance of a data set.
Syntax
sql
varSamp(x)
Alias:
VAR_SAMP
.
Parameters
x
: The population for which you want to calculate the sample variance.
(U)Int*
,
Float*
,
Decimal*
.
Returned value
Returns the sample variance of the input data set
x
.
Float64
.
Implementation details
The
varSamp
function calculates the sample variance using the following formula:
$$
\sum\frac{(x - \text{mean}(x))^2}{(n - 1)}
$$
Where:
x
is each individual data point in the data set.
mean(x)
is the arithmetic mean of the data set.
n
is the number of data points in the data set.
The function assumes that the input data set represents a sample from a larger population. If you want to calculate the variance of the entire population (when you have the complete data set), you should use
varPop
instead.
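The formula can be cross-checked with a short Python sketch (an illustration, not ClickHouse's implementation), using the same data as the example below:

```python
def var_samp(xs):
    # Sample variance: sum((x - mean)^2) / (n - 1)
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)

data = [10.5, 12.3, 9.8, 11.2, 10.7]
print(round(var_samp(data), 3))  # 0.865, as in the example below
```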
Example
Query:
```sql
DROP TABLE IF EXISTS test_data;
CREATE TABLE test_data
(
x Float64
)
ENGINE = Memory;
INSERT INTO test_data VALUES (10.5), (12.3), (9.8), (11.2), (10.7);
SELECT round(varSamp(x),3) AS var_samp FROM test_data;
```
Response:
response
ββvar_sampββ
β 0.865 β
ββββββββββββ | {"source_file": "varsamp.md"} | [
0.01412949152290821,
-0.0408620685338974,
0.019355401396751404,
-0.005380680784583092,
-0.06643829494714737,
0.026503760367631912,
0.06379900872707367,
0.10464812070131302,
-0.016103215515613556,
0.011006677523255348,
0.019061513245105743,
0.018054183572530746,
0.016182122752070427,
-0.090... |
a26e7d90-e099-4dd3-a790-e0006a425914 | description: 'Calculates Cramer''s V, but uses a bias correction.'
sidebar_position: 128
slug: /sql-reference/aggregate-functions/reference/cramersvbiascorrected
title: 'cramersVBiasCorrected'
doc_type: 'reference'
cramersVBiasCorrected
Cramer's V is a measure of association between two columns in a table. The result of the
cramersV
function
ranges from 0 (corresponding to no association between the variables) to 1 and can reach 1 only when each value is completely determined by the other. The function can be heavily biased, so this version of Cramer's V uses the
bias correction
.
Syntax
sql
cramersVBiasCorrected(column1, column2)
Parameters
column1
: first column to be compared.
column2
: second column to be compared.
Returned value
a value between 0 (corresponding to no association between the columns' values) and 1 (complete association).
Type: always
Float64
.
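The bias correction follows the standard Bergsma formula (chi-square over the contingency table, then corrected phi-square and corrected row/column counts). A hedged Python sketch, assuming that formula rather than reading ClickHouse's source, reproduces the documented example result:

```python
from collections import Counter
from math import sqrt

def cramers_v_bias_corrected(xs, ys):
    # Chi-square statistic over the contingency table, then Bergsma's
    # bias-corrected Cramer's V (assumed to match ClickHouse's formula).
    n = len(xs)
    joint = Counter(zip(xs, ys))
    fx, fy = Counter(xs), Counter(ys)
    chi2 = 0.0
    for a, ca in fx.items():
        for b, cb in fy.items():
            expected = ca * cb / n
            observed = joint.get((a, b), 0)
            chi2 += (observed - expected) ** 2 / expected
    phi2 = chi2 / n
    r, k = len(fx), len(fy)
    phi2_corr = max(0.0, phi2 - (r - 1) * (k - 1) / (n - 1))
    r_corr = r - (r - 1) ** 2 / (n - 1)
    k_corr = k - (k - 1) ** 2 / (n - 1)
    return sqrt(phi2_corr / min(r_corr - 1, k_corr - 1))

a = [i % 10 for i in range(150)]
b = [i % 4 for i in range(150)]
print(cramers_v_bias_corrected(a, b))  # ~0.53051, as in the query below
```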
Example
The two columns compared below have a moderate association with each other. Notice the result of
cramersVBiasCorrected
is smaller than the result of
cramersV
:
Query:
sql
SELECT
cramersV(a, b),
cramersVBiasCorrected(a ,b)
FROM
(
SELECT
number % 10 AS a,
number % 4 AS b
FROM
numbers(150)
);
Result:
response
ββββββcramersV(a, b)ββ¬βcramersVBiasCorrected(a, b)ββ
β 0.5798088336225178 β 0.5305112825189074 β
ββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββ | {"source_file": "cramersvbiascorrected.md"} | [
0.0214606374502182,
-0.017864767462015152,
-0.09156597405672073,
0.024793952703475952,
0.014701498672366142,
0.024925265461206436,
0.010374684818089008,
0.012694359757006168,
-0.016861123964190483,
0.05344557762145996,
-0.028429904952645302,
-0.01520291343331337,
0.048198070377111435,
-0.0... |
12847796-2069-4d1f-a367-efb64020379c | description: 'quantiles, quantilesExactExclusive, quantilesExactInclusive, quantilesGK'
sidebar_position: 177
slug: /sql-reference/aggregate-functions/reference/quantiles
title: 'quantiles Functions'
doc_type: 'reference'
quantiles functions
quantiles {#quantiles}
Syntax:
quantiles(level1, level2, ...)(x)
All the quantile functions also have corresponding quantiles functions:
quantiles
,
quantilesDeterministic
,
quantilesTiming
,
quantilesTimingWeighted
,
quantilesExact
,
quantilesExactWeighted
,
quantileExactWeightedInterpolated
,
quantileInterpolatedWeighted
,
quantilesTDigest
,
quantilesBFloat16
,
quantilesDD
. These functions calculate all the quantiles of the listed levels in one pass, and return an array of the resulting values.
quantilesExactExclusive {#quantilesexactexclusive}
Exactly computes the
quantiles
of a numeric data sequence.
To get the exact value, all the passed values are combined into an array, which is then partially sorted. Therefore, the function consumes
O(n)
memory, where
n
is a number of values that were passed. However, for a small number of values, the function is very effective.
This function is equivalent to
PERCENTILE.EXC
Excel function (
type R6
).
Works more efficiently with sets of levels than
quantileExactExclusive
.
Syntax
sql
quantilesExactExclusive(level1, level2, ...)(expr)
Arguments
expr
β Expression over the column values resulting in numeric
data types
,
Date
or
DateTime
.
Parameters
level
β Levels of quantiles. Possible values: (0, 1) β bounds not included.
Float
.
Returned value
Array
of quantiles of the specified levels.
Type of array values:
Float64
for numeric data type input.
Date
if input values have the
Date
type.
DateTime
if input values have the
DateTime
type.
Example
Query:
```sql
CREATE TABLE num AS numbers(1000);
SELECT quantilesExactExclusive(0.25, 0.5, 0.75, 0.9, 0.95, 0.99, 0.999)(x) FROM (SELECT number AS x FROM num);
```
Result:
text
ββquantilesExactExclusive(0.25, 0.5, 0.75, 0.9, 0.95, 0.99, 0.999)(x)ββ
β [249.25,499.5,749.75,899.9,949.9499999999999,989.99,998.999] β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
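The R6 / PERCENTILE.EXC rule can be sketched in a few lines of Python (an illustration, not ClickHouse's implementation): compute the fractional rank h = (n + 1) * level over the sorted data, then linearly interpolate between the two neighbouring elements.

```python
import math

def quantile_exact_exclusive(data, level):
    # R6 / Excel PERCENTILE.EXC: level lies in (0, 1), bounds excluded.
    xs = sorted(data)
    n = len(xs)
    h = (n + 1) * level          # 1-based fractional rank
    if h < 1:
        return xs[0]
    if h >= n:
        return xs[-1]
    k = math.floor(h)
    return xs[k - 1] + (h - k) * (xs[k] - xs[k - 1])

print([quantile_exact_exclusive(range(1000), p) for p in (0.25, 0.5, 0.75)])
# [249.25, 499.5, 749.75], matching the example above
```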
quantilesExactInclusive {#quantilesexactinclusive}
Exactly computes the
quantiles
of a numeric data sequence.
To get the exact value, all the passed values are combined into an array, which is then partially sorted. Therefore, the function consumes
O(n)
memory, where
n
is a number of values that were passed. However, for a small number of values, the function is very effective.
This function is equivalent to
PERCENTILE.INC
Excel function (
type R7
).
Works more efficiently with sets of levels than
quantileExactInclusive
.
Syntax
sql
quantilesExactInclusive(level1, level2, ...)(expr)
Arguments
expr
β Expression over the column values resulting in numeric
data types
,
Date
or
DateTime
.
Parameters | {"source_file": "quantiles.md"} | [
-0.08362387865781784,
-0.0103496965020895,
-0.00289948214776814,
-0.01652245596051216,
-0.05156091973185539,
-0.08305904269218445,
0.04880102351307869,
0.04230230301618576,
-0.04995939880609512,
-0.028769660741090775,
-0.024541044607758522,
-0.11246190965175629,
0.010814009234309196,
-0.02... |
c3d0a0a3-9fdc-4120-b717-66a3efcf1ea9 | Syntax
sql
quantilesExactInclusive(level1, level2, ...)(expr)
Arguments
expr
β Expression over the column values resulting in numeric
data types
,
Date
or
DateTime
.
Parameters
level
β Levels of quantiles. Possible values: [0, 1] β bounds included.
Float
.
Returned value
Array
of quantiles of the specified levels.
Type of array values:
Float64
for numeric data type input.
Date
if input values have the
Date
type.
DateTime
if input values have the
DateTime
type.
Example
Query:
```sql
CREATE TABLE num AS numbers(1000);
SELECT quantilesExactInclusive(0.25, 0.5, 0.75, 0.9, 0.95, 0.99, 0.999)(x) FROM (SELECT number AS x FROM num);
```
Result:
text
ββquantilesExactInclusive(0.25, 0.5, 0.75, 0.9, 0.95, 0.99, 0.999)(x)ββ
β [249.75,499.5,749.25,899.1,949.05,989.01,998.001] β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
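The inclusive (R7 / PERCENTILE.INC) rule differs only in the rank formula, h = (n - 1) * level + 1, which makes the bounds 0 and 1 valid levels. A Python sketch (an illustration, not ClickHouse's implementation):

```python
import math

def quantile_exact_inclusive(data, level):
    # R7 / Excel PERCENTILE.INC: level may take the bounds 0 and 1.
    xs = sorted(data)
    n = len(xs)
    if n == 1:
        return xs[0]
    h = (n - 1) * level + 1      # 1-based fractional rank
    k = min(math.floor(h), n - 1)
    return xs[k - 1] + (h - k) * (xs[k] - xs[k - 1])

print([quantile_exact_inclusive(range(1000), p) for p in (0.25, 0.5, 0.75)])
# [249.75, 499.5, 749.25], matching the example above
```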
quantilesGK {#quantilesgk}
quantilesGK
works similarly with
quantileGK
but allows calculating quantiles at different levels simultaneously and returns an array.
Syntax
sql
quantilesGK(accuracy, level1, level2, ...)(expr)
Returned value
Array
of quantiles of the specified levels.
Type of array values:
Float64
for numeric data type input.
Date
if input values have the
Date
type.
DateTime
if input values have the
DateTime
type.
Example
Query:
```sql
SELECT quantilesGK(1, 0.25, 0.5, 0.75)(number + 1)
FROM numbers(1000)
ββquantilesGK(1, 0.25, 0.5, 0.75)(plus(number, 1))ββ
β [1,1,1] β
ββββββββββββββββββββββββββββββββββββββββββββββββββββ
SELECT quantilesGK(10, 0.25, 0.5, 0.75)(number + 1)
FROM numbers(1000)
ββquantilesGK(10, 0.25, 0.5, 0.75)(plus(number, 1))ββ
β [156,413,659] β
βββββββββββββββββββββββββββββββββββββββββββββββββββββ
SELECT quantilesGK(100, 0.25, 0.5, 0.75)(number + 1)
FROM numbers(1000)
ββquantilesGK(100, 0.25, 0.5, 0.75)(plus(number, 1))ββ
β [251,498,741] β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββ
SELECT quantilesGK(1000, 0.25, 0.5, 0.75)(number + 1)
FROM numbers(1000)
ββquantilesGK(1000, 0.25, 0.5, 0.75)(plus(number, 1))ββ
β [249,499,749] β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββ
``` | {"source_file": "quantiles.md"} | [
-0.022206395864486694,
0.04742470011115074,
0.030252715572714806,
0.020491985604166985,
-0.05529246851801872,
-0.0518597774207592,
0.0931159034371376,
0.03920642286539078,
0.020636001601815224,
-0.02874417044222355,
0.04495001211762428,
-0.09678781032562256,
-0.001832147710956633,
0.031062... |
dce4abe7-b578-4984-adbc-9dc3574ccf69 | description: 'Aggregate function that calculates PromQL-like linear prediction over time series data on the specified grid.'
sidebar_position: 228
slug: /sql-reference/aggregate-functions/reference/timeSeriesPredictLinearToGrid
title: 'timeSeriesPredictLinearToGrid'
doc_type: 'reference'
Aggregate function that takes time series data as pairs of timestamps and values and calculates a
PromQL-like linear prediction
with a specified prediction timestamp offset from this data on a regular time grid described by start timestamp, end timestamp and step. For each point on the grid the samples for calculating
predict_linear
are considered within the specified time window.
Parameters:
-
start timestamp
- Specifies start of the grid.
-
end timestamp
- Specifies end of the grid.
-
grid step
- Specifies step of the grid in seconds.
-
staleness
- Specifies the maximum "staleness" in seconds of the considered samples. The staleness window is a left-open and right-closed interval.
-
predict_offset
- Specifies number of seconds of offset to add to prediction time.
Arguments:
-
timestamp
- timestamp of the sample
-
value
- value of the time series corresponding to the
timestamp
Return value:
predict_linear
values on the specified grid as an
Array(Nullable(Float64))
. The returned array contains one value for each time grid point. The value is NULL if there are not enough samples within the window to calculate the prediction value for a particular grid point.
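A hedged Python sketch of the computation: for each grid point, an ordinary least-squares line is fitted over the samples in the staleness window ending at that point and evaluated at t + predict_offset. The window boundary handling below is inferred from the documented example and may not match the engine exactly.

```python
def predict_linear_to_grid(timestamps, values, start, end, step, window, offset):
    # For each grid point t, fit y = a + b*x over the samples whose
    # timestamp falls in the staleness window ending at t, then evaluate
    # the fitted line at t + offset. None models SQL NULL (fewer than
    # two usable samples).
    result = []
    for t in range(start, end + 1, step):
        pts = [(ts, v) for ts, v in zip(timestamps, values) if t - window <= ts <= t]
        if len(pts) < 2:
            result.append(None)
            continue
        n = len(pts)
        mx = sum(ts for ts, _ in pts) / n
        my = sum(v for _, v in pts) / n
        sxx = sum((ts - mx) ** 2 for ts, _ in pts)
        sxy = sum((ts - mx) * (v - my) for ts, v in pts)
        if sxx == 0:             # all samples share one timestamp
            result.append(None)
            continue
        result.append(my + (sxy / sxx) * (t + offset - mx))
    return result

grid = predict_linear_to_grid(
    [110, 120, 130, 140, 190, 200, 210, 220, 230],
    [1, 1, 3, 4, 5, 5, 8, 12, 13],
    start=90, end=210, step=15, window=45, offset=60)
print(grid)  # ~[None, None, 1.0, 9.1667, 11.6, 16.9167, None, None, 16.5]
```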
Example:
The following query calculates
predict_linear
values on the grid [90, 105, 120, 135, 150, 165, 180, 195, 210] with a 60 second offset:
sql
WITH
-- NOTE: the gap between 140 and 190 is to show how values are filled for ts = 150, 165, 180 according to window parameter
[110, 120, 130, 140, 190, 200, 210, 220, 230]::Array(DateTime) AS timestamps,
[1, 1, 3, 4, 5, 5, 8, 12, 13]::Array(Float32) AS values, -- array of values corresponding to timestamps above
90 AS start_ts, -- start of timestamp grid
90 + 120 AS end_ts, -- end of timestamp grid
15 AS step_seconds, -- step of timestamp grid
45 AS window_seconds, -- "staleness" window
60 AS predict_offset -- prediction time offset
SELECT timeSeriesPredictLinearToGrid(start_ts, end_ts, step_seconds, window_seconds, predict_offset)(timestamp, value)
FROM
(
-- This subquery converts arrays of timestamps and values into rows of `timestamp`, `value`
SELECT
arrayJoin(arrayZip(timestamps, values)) AS ts_and_val,
ts_and_val.1 AS timestamp,
ts_and_val.2 AS value
);
Response:
response
ββtimeSeriesPredictLinearToGrid(start_ts, end_ts, step_seconds, window_seconds, predict_offset)(timestamp, value)ββ
1. β [NULL,NULL,1,9.166667,11.6,16.916666,NULL,NULL,16.5] β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | {"source_file": "timeSeriesPredictLinearToGrid.md"} | [
-0.10025354474782944,
-0.04745491221547127,
-0.07679856568574905,
0.08674167096614838,
-0.008301829919219017,
-0.003314773552119732,
-0.0224081352353096,
0.06031016632914543,
-0.030573207885026932,
0.00565925519913435,
-0.03792901337146759,
-0.052121467888355255,
0.00637437216937542,
-0.01... |
6fcabad5-32e7-474b-8235-b2b9930a17f6 | Also it is possible to pass multiple samples of timestamps and values as Arrays of equal size. The same query with array arguments:
sql
WITH
[110, 120, 130, 140, 190, 200, 210, 220, 230]::Array(DateTime) AS timestamps,
[1, 1, 3, 4, 5, 5, 8, 12, 13]::Array(Float32) AS values,
90 AS start_ts,
90 + 120 AS end_ts,
15 AS step_seconds,
45 AS window_seconds,
60 AS predict_offset
SELECT timeSeriesPredictLinearToGrid(start_ts, end_ts, step_seconds, window_seconds, predict_offset)(timestamps, values);
:::note
This function is experimental, enable it by setting
allow_experimental_ts_to_grid_aggregate_function=true
.
::: | {"source_file": "timeSeriesPredictLinearToGrid.md"} | [
-0.021276213228702545,
-0.0053113363683223724,
-0.026025546714663506,
0.0407668799161911,
-0.008305104449391365,
0.006870747078210115,
0.03247526288032532,
0.0016548983985558152,
-0.04913587495684624,
-0.02533606067299843,
-0.06915578246116638,
-0.09858448058366776,
-0.021738141775131226,
... |
eb99566d-b8d0-4147-b426-ba577cd3b802 | description: 'Selects the last encountered value of a column.'
sidebar_position: 105
slug: /sql-reference/aggregate-functions/reference/anylast
title: 'anyLast'
doc_type: 'reference'
anyLast
Selects the last encountered value of a column.
:::warning
As a query can be executed in arbitrary order, the result of this function is non-deterministic.
If you need an arbitrary but deterministic result, use functions
min
or
max
.
:::
By default, the function never returns NULL, i.e. ignores NULL values in the input column.
However, if the function is used with the
RESPECT NULLS
modifier, it returns the last value read, whether it is NULL or not.
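The NULL handling can be modelled in a few lines of Python, with None standing in for NULL (an illustration, not ClickHouse's implementation):

```python
def any_last(values, respect_nulls=False):
    # Default: last non-None value, i.e. NULLs are skipped.
    # respect_nulls=True: simply the last value read, NULL or not.
    result = None
    for v in values:
        if respect_nulls or v is not None:
            result = v
    return result

cities = ["Amsterdam", None, "New York", "Tokyo", "Valencia", None]
print(any_last(cities))                      # Valencia
print(any_last(cities, respect_nulls=True))  # None, shown as NULL in SQL
```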
Syntax
sql
anyLast(column) [RESPECT NULLS]
Alias
anyLast(column)
(without
RESPECT NULLS
)
-
last_value
.
Aliases for
anyLast(column) RESPECT NULLS
-
anyLastRespectNulls
,
anyLast_respect_nulls
-
lastValueRespectNulls
,
last_value_respect_nulls
Parameters
-
column
: The column name.
Returned value
The last value encountered.
Example
Query:
```sql
CREATE TABLE tab (city Nullable(String)) ENGINE=Memory;
INSERT INTO tab (city) VALUES ('Amsterdam'),(NULL),('New York'),('Tokyo'),('Valencia'),(NULL);
SELECT anyLast(city), anyLastRespectNulls(city) FROM tab;
```
response
ββanyLast(city)ββ¬βanyLastRespectNulls(city)ββ
β Valencia β ᴺᵁᴸᴸ β
βββββββββββββββββ΄ββββββββββββββββββββββββββββ | {"source_file": "anylast.md"} | [
-0.00539913447573781,
0.027894267812371254,
-0.010843535885214806,
0.012669478543102741,
-0.03341503441333771,
0.014751272276043892,
0.03561721742153168,
0.06871578097343445,
0.024072647094726562,
0.05527760460972786,
0.05149339511990547,
-0.04174341633915901,
0.007400284521281719,
-0.0603... |
14b5948f-c6d7-4d22-8876-f701ac334dc5 | description: 'Calculates the Pearson correlation coefficient, but uses a numerically
stable algorithm.'
sidebar_position: 119
slug: /sql-reference/aggregate-functions/reference/corrstable
title: 'corrStable'
doc_type: 'reference'
corrStable
Calculates the
Pearson correlation coefficient
:
$$
\frac{\Sigma{(x - \bar{x})(y - \bar{y})}}{\sqrt{\Sigma{(x - \bar{x})^2} * \Sigma{(y - \bar{y})^2}}}
$$
Similar to the
corr
function, but uses a numerically stable algorithm. As a result,
corrStable
is slower than
corr
but produces a more accurate result.
Syntax
sql
corrStable(x, y)
Arguments
x
β first variable.
(U)Int*
,
Float*
,
Decimal
.
y
β second variable.
(U)Int*
,
Float*
,
Decimal
.
Returned Value
The Pearson correlation coefficient.
Float64
.
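The formula above can be cross-checked with plain Python. This is a naive two-pass version; the point of corrStable is numerical stability in a single pass over large streams, which this sketch does not attempt.

```python
from math import sqrt

def pearson_corr(xs, ys):
    # Two-pass Pearson correlation following the formula above.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / sqrt(sxx * syy)

x = [5.6, -9.6, -1.3, 5.3, 4.4, -8.6, 5.1, 7.9, -8.2, -3]
y = [-4.4, 3, -4, 9.7, 0.037, -7.8, 9.3, -3.6, 0.62, 7.3]
print(pearson_corr(x, y))  # ~0.173027, as in the query below
```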
Example
Query:
sql
DROP TABLE IF EXISTS series;
CREATE TABLE series
(
i UInt32,
x_value Float64,
y_value Float64
)
ENGINE = Memory;
INSERT INTO series(i, x_value, y_value) VALUES (1, 5.6, -4.4),(2, -9.6, 3),(3, -1.3, -4),(4, 5.3, 9.7),(5, 4.4, 0.037),(6, -8.6, -7.8),(7, 5.1, 9.3),(8, 7.9, -3.6),(9, -8.2, 0.62),(10, -3, 7.3);
sql
SELECT corrStable(x_value, y_value)
FROM series;
Result:
response
ββcorrStable(x_value, y_value)ββ
β 0.17302657554532558 β
ββββββββββββββββββββββββββββββββ | {"source_file": "corrstable.md"} | [
-0.030455119907855988,
-0.09042900800704956,
-0.024862760677933693,
0.026596734300255775,
-0.0570794977247715,
-0.017288610339164734,
-0.010661625303328037,
0.013125898316502571,
-0.021860532462596893,
0.018585560843348503,
0.010914974845945835,
0.006962308660149574,
0.017826559022068977,
... |
2c3d6925-61fe-4186-a476-c26a38af02fd | description: 'The result is equal to the square root of varPop. Unlike stddevPop,
this function uses a numerically stable algorithm.'
sidebar_position: 189
slug: /sql-reference/aggregate-functions/reference/stddevpopstable
title: 'stddevPopStable'
doc_type: 'reference'
stddevPopStable
The result is equal to the square root of
varPop
. Unlike
stddevPop
, this function uses a numerically stable algorithm. It works slower but provides a lower computational error.
Syntax
sql
stddevPopStable(x)
Parameters
x
: Population of values to find the standard deviation of.
(U)Int*
,
Float*
,
Decimal*
.
Returned value
Square root of the variance of
x
.
Float64
.
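As a quick model of what is being computed (ignoring the stable one-pass aspect), the population standard deviation is the square root of the mean squared deviation. Note that the example result below is close to the theoretical value for a uniform distribution of width w, w / sqrt(12) = 4.5 / sqrt(12) ≈ 1.299:

```python
from math import sqrt

def stddev_pop(xs):
    # Population standard deviation: sqrt(sum((x - mean)^2) / n).
    n = len(xs)
    mean = sum(xs) / n
    return sqrt(sum((x - mean) ** 2 for x in xs) / n)

print(stddev_pop([1, 2, 3, 4]))  # ~1.118, i.e. sqrt(1.25)
print(4.5 / sqrt(12))            # ~1.299, theory for Uniform(5.5, 10)
```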
Example
Query:
```sql
DROP TABLE IF EXISTS test_data;
CREATE TABLE test_data
(
population Float64
)
ENGINE = Log;
INSERT INTO test_data SELECT randUniform(5.5, 10) FROM numbers(1000000);
SELECT
stddevPopStable(population) AS stddev
FROM test_data;
```
Result:
response
ββββββββββββββstddevββ
β 1.2999977786592576 β
ββββββββββββββββββββββ | {"source_file": "stddevpopstable.md"} | [
0.0014843952376395464,
-0.009620287455618382,
-0.030991384759545326,
0.04774032533168793,
-0.05886076018214226,
-0.0634559616446495,
0.023835720494389534,
0.06629997491836548,
0.002285481197759509,
-0.03129712492227554,
0.03966785594820976,
-0.015537984669208527,
0.0744004026055336,
-0.126... |
43c83fa6-a3f5-48e6-b3cb-2662951d77b9 | description: 'Aggregate function that calculates the maximum number of times that
a group of intervals intersects each other (if all the intervals intersect at least
once).'
sidebar_position: 163
slug: /sql-reference/aggregate-functions/reference/maxintersections
title: 'maxIntersections'
doc_type: 'reference'
maxIntersections
Aggregate function that calculates the maximum number of times that a group of intervals intersects each other (if all the intervals intersect at least once).
The syntax is:
sql
maxIntersections(start_column, end_column)
Arguments
start_column
β the numeric column that represents the start of each interval. If
start_column
is
NULL
or 0 then the interval will be skipped.
end_column
- the numeric column that represents the end of each interval. If
end_column
is
NULL
or 0 then the interval will be skipped.
Returned value
Returns the maximum number of intersected intervals.
Example
```sql
CREATE TABLE my_events (
start UInt32,
end UInt32
)
ENGINE = MergeTree
ORDER BY tuple();
INSERT INTO my_events VALUES
(1, 3),
(1, 6),
(2, 5),
(3, 7);
```
The intervals look like the following:
response
1 - 3
1 - - - - 6
2 - - 5
3 - - - 7
Three of these intervals have a common value (the value is
4
, but the value itself is not important; we are measuring the count of the intersections). The intervals
(1,3)
and
(3,7)
share an endpoint but are not considered intersecting by the
maxIntersections
function.
sql
SELECT maxIntersections(start, end) FROM my_events;
Response:
response
3
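This behaviour can be reproduced with a sweep-line sketch in Python (an illustration, not ClickHouse's implementation). End events are processed before start events at equal coordinates, so touching intervals like (1,3) and (3,7) do not count as intersecting:

```python
def max_intersections(intervals):
    # Sweep line over start/end events; 0-tagged end events sort before
    # 1-tagged start events at the same coordinate, so intervals that
    # merely share an endpoint are not counted as intersecting.
    events = []
    for start, end in intervals:
        if not start or not end:   # skip NULL/0 bounds, as documented
            continue
        events.append((start, 1))
        events.append((end, 0))
    events.sort()
    best = current = 0
    for _, is_start in events:
        current += 1 if is_start else -1
        best = max(best, current)
    return best

print(max_intersections([(1, 3), (1, 6), (2, 5), (3, 7)]))  # 3
```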
If you have multiple occurrences of the maximum interval, you can use the
maxIntersectionsPosition
function
to locate the number and location of those occurrences. | {"source_file": "maxintersections.md"} | [
0.03940523415803909,
-0.039852291345596313,
0.031304825097322464,
-0.039669472724199295,
-0.08753857016563416,
-0.02461981773376465,
0.0023758376482874155,
0.06051500514149666,
-0.01994379237294197,
-0.011687899939715862,
0.001519288751296699,
-0.026733437553048134,
0.035011522471904755,
-... |
b879a131-0f98-4628-a57a-5f04d5bee42f | description: 'Aggregate function which builds a flamegraph using the list of stacktraces.'
sidebar_position: 138
slug: /sql-reference/aggregate-functions/reference/flame_graph
title: 'flameGraph'
doc_type: 'reference'
flameGraph
Aggregate function which builds a
flamegraph
using the list of stacktraces. Outputs an array of strings which can be used by
flamegraph.pl utility
to render an SVG of the flamegraph.
Syntax {#syntax}
sql
flameGraph(traces, [size], [ptr])
Parameters {#parameters}
traces
β a stacktrace.
Array
(
UInt64
).
size
β an allocation size for memory profiling. (optional - default
1
).
UInt64
.
ptr
β an allocation address. (optional - default
0
).
UInt64
.
:::note
In the case where
ptr != 0
, a flameGraph will map allocations (size > 0) and deallocations (size < 0) with the same size and ptr.
Only allocations which were not freed are shown. Non-mapped deallocations are ignored.
:::
Returned value {#returned-value}
An array of strings for use with
flamegraph.pl utility
.
Array
(
String
).
Examples {#examples}
Building a flamegraph based on a CPU query profiler {#building-a-flamegraph-based-on-a-cpu-query-profiler}
sql
SET query_profiler_cpu_time_period_ns=10000000;
SELECT SearchPhrase, COUNT(DISTINCT UserID) AS u FROM hits WHERE SearchPhrase <> '' GROUP BY SearchPhrase ORDER BY u DESC LIMIT 10;
text
clickhouse client --allow_introspection_functions=1 -q "select arrayJoin(flameGraph(arrayReverse(trace))) from system.trace_log where trace_type = 'CPU' and query_id = 'xxx'" | ~/dev/FlameGraph/flamegraph.pl > flame_cpu.svg
Building a flamegraph based on a memory query profiler, showing all allocations {#building-a-flamegraph-based-on-a-memory-query-profiler-showing-all-allocations}
sql
SET memory_profiler_sample_probability=1, max_untracked_memory=1;
SELECT SearchPhrase, COUNT(DISTINCT UserID) AS u FROM hits WHERE SearchPhrase <> '' GROUP BY SearchPhrase ORDER BY u DESC LIMIT 10;
text
clickhouse client --allow_introspection_functions=1 -q "select arrayJoin(flameGraph(trace, size)) from system.trace_log where trace_type = 'MemorySample' and query_id = 'xxx'" | ~/dev/FlameGraph/flamegraph.pl --countname=bytes --color=mem > flame_mem.svg
Building a flamegraph based on a memory query profiler, showing allocations which were not deallocated in query context {#building-a-flamegraph-based-on-a-memory-query-profiler-showing-allocations-which-were-not-deallocated-in-query-context}
sql
SET memory_profiler_sample_probability=1, max_untracked_memory=1, use_uncompressed_cache=1, merge_tree_max_rows_to_use_cache=100000000000, merge_tree_max_bytes_to_use_cache=1000000000000;
SELECT SearchPhrase, COUNT(DISTINCT UserID) AS u FROM hits WHERE SearchPhrase <> '' GROUP BY SearchPhrase ORDER BY u DESC LIMIT 10; | {"source_file": "flame_graph.md"} | [
0.020419461652636528,
-0.029629655182361603,
-0.08747866749763489,
0.06785795092582703,
-0.000986415077932179,
-0.025977354496717453,
0.027285777032375336,
0.03555365279316902,
-0.03483901545405388,
-0.03506810963153839,
-0.03996830806136131,
0.03280981630086899,
-0.03756888210773468,
-0.0... |
eeddf670-9ebf-4442-9c8b-8bc2fa29c61d | text
clickhouse client --allow_introspection_functions=1 -q "SELECT arrayJoin(flameGraph(trace, size, ptr)) FROM system.trace_log WHERE trace_type = 'MemorySample' AND query_id = 'xxx'" | ~/dev/FlameGraph/flamegraph.pl --countname=bytes --color=mem > flame_mem_untracked.svg
Build a flamegraph based on memory query profiler, showing active allocations at the fixed point of time {#build-a-flamegraph-based-on-memory-query-profiler-showing-active-allocations-at-the-fixed-point-of-time}
sql
SET memory_profiler_sample_probability=1, max_untracked_memory=1;
SELECT SearchPhrase, COUNT(DISTINCT UserID) AS u FROM hits WHERE SearchPhrase <> '' GROUP BY SearchPhrase ORDER BY u DESC LIMIT 10;
1 - Memory usage per second
sql
SELECT event_time, m, formatReadableSize(max(s) AS m) FROM (SELECT event_time, sum(size) OVER (ORDER BY event_time) AS s FROM system.trace_log WHERE query_id = 'xxx' AND trace_type = 'MemorySample') GROUP BY event_time ORDER BY event_time;
2 - Find a time point with maximal memory usage
sql
SELECT argMax(event_time, s), max(s) FROM (SELECT event_time, sum(size) OVER (ORDER BY event_time) AS s FROM system.trace_log WHERE query_id = 'xxx' AND trace_type = 'MemorySample');
3 - Fix active allocations at fixed point of time
text
clickhouse client --allow_introspection_functions=1 -q "SELECT arrayJoin(flameGraph(trace, size, ptr)) FROM (SELECT * FROM system.trace_log WHERE trace_type = 'MemorySample' AND query_id = 'xxx' AND event_time <= 'yyy' ORDER BY event_time)" | ~/dev/FlameGraph/flamegraph.pl --countname=bytes --color=mem > flame_mem_time_point_pos.svg
4 - Find deallocations at fixed point of time
text
clickhouse client --allow_introspection_functions=1 -q "SELECT arrayJoin(flameGraph(trace, -size, ptr)) FROM (SELECT * FROM system.trace_log WHERE trace_type = 'MemorySample' AND query_id = 'xxx' AND event_time > 'yyy' ORDER BY event_time desc)" | ~/dev/FlameGraph/flamegraph.pl --countname=bytes --color=mem > flame_mem_time_point_neg.svg | {"source_file": "flame_graph.md"} | [
0.11489514261484146,
-0.03119163028895855,
-0.1000395268201828,
0.10827973484992981,
-0.026549262925982475,
-0.00478731282055378,
0.15181028842926025,
0.020688917487859726,
0.022168220952153206,
-0.008519545197486877,
-0.01618361659348011,
0.01584881730377674,
0.06888847053050995,
-0.01859... |
f76efe22-5748-48cc-98ab-299e74e7a648 | description: 'Aggregate function that calculates the minimum across a group of values.'
sidebar_position: 168
slug: /sql-reference/aggregate-functions/reference/min
title: 'min'
doc_type: 'reference'
Aggregate function that calculates the minimum across a group of values.
Example:
sql
SELECT min(salary) FROM employees;
sql
SELECT department, min(salary) FROM employees GROUP BY department;
If you need a non-aggregate function to choose the minimum of two values, see
least
:
sql
SELECT least(a, b) FROM table; | {"source_file": "min.md"} | [
0.02315080724656582,
0.06707838177680969,
-0.0037347497418522835,
0.008322206325829029,
-0.09486990422010422,
0.023519402369856834,
-0.021073337644338608,
0.09842749685049057,
-0.014514919370412827,
-0.001749226008541882,
0.04979359358549118,
-0.0766933336853981,
0.0689723938703537,
-0.020... |
88a17c4f-6488-4cd3-9b22-7ebe19a7ce32 | description: 'Totals a
value
array according to the keys specified in the
key
array. Returns a tuple of two arrays: keys in sorted order, and values summed for
the corresponding keys. Differs from the sumMap function in that it does summation
with overflow.'
sidebar_position: 199
slug: /sql-reference/aggregate-functions/reference/summapwithoverflow
title: 'sumMapWithOverflow'
doc_type: 'reference'
sumMapWithOverflow
Totals a
value
array according to the keys specified in the
key
array. Returns a tuple of two arrays: keys in sorted order, and values summed for the corresponding keys.
It differs from the
sumMap
function in that it does summation with overflow - i.e. returns the same data type for the summation as the argument data type.
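Overflowing summation per key can be modelled in Python by wrapping each running total to the argument type's width (a sketch assuming UInt8-style wraparound; not ClickHouse's implementation):

```python
def sum_map_with_overflow(rows, bits=8):
    # rows: (keys, values) pairs; each per-key total wraps modulo 2**bits,
    # imitating summation kept in the argument's own integer type.
    mask = (1 << bits) - 1
    totals = {}
    for keys, values in rows:
        for key, value in zip(keys, values):
            totals[key] = (totals.get(key, 0) + value) & mask
    sorted_keys = sorted(totals)
    return sorted_keys, [totals[k] for k in sorted_keys]

rows = [([1, 2], [200, 10]), ([1, 3], [100, 5])]
print(sum_map_with_overflow(rows))  # ([1, 2, 3], [44, 10, 5]); 300 wraps to 44
```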
Syntax
sumMapWithOverflow(key <Array>, value <Array>)
Array type
.
sumMapWithOverflow(Tuple(key <Array>, value <Array>))
Tuple type
.
Arguments
key
:
Array
of keys.
value
:
Array
of values.
Passing a tuple of key and value arrays is a synonym to passing separately an array of keys and an array of values.
:::note
The number of elements in
key
and
value
must be the same for each row that is totaled.
:::
Returned Value
Returns a tuple of two arrays: keys in sorted order, and values summed for the corresponding keys.
Example
First we create a table called
sum_map
, and insert some data into it. Arrays of keys and values are stored separately as a column called
statusMap
of
Nested
type, and together as a column called
statusMapTuple
of
tuple
type to illustrate the use of the two different syntaxes of this function described above.
Query:
sql
CREATE TABLE sum_map(
date Date,
timeslot DateTime,
statusMap Nested(
status UInt8,
requests UInt8
),
statusMapTuple Tuple(Array(Int8), Array(Int8))
) ENGINE = Log;
sql
INSERT INTO sum_map VALUES
('2000-01-01', '2000-01-01 00:00:00', [1, 2, 3], [10, 10, 10], ([1, 2, 3], [10, 10, 10])),
('2000-01-01', '2000-01-01 00:00:00', [3, 4, 5], [10, 10, 10], ([3, 4, 5], [10, 10, 10])),
('2000-01-01', '2000-01-01 00:01:00', [4, 5, 6], [10, 10, 10], ([4, 5, 6], [10, 10, 10])),
('2000-01-01', '2000-01-01 00:01:00', [6, 7, 8], [10, 10, 10], ([6, 7, 8], [10, 10, 10]));
If we query the table using the
sumMap
,
sumMapWithOverflow
with the array type syntax, and
toTypeName
functions then we can see that
for the
sumMapWithOverflow
function, the data type of the summed values array is the same as the argument type, both
UInt8
(i.e. summation was done with overflow). For
sumMap
the data type of the summed values arrays has changed from
UInt8
to
UInt64
such that overflow does not occur.
Query:
sql
SELECT
timeslot,
toTypeName(sumMap(statusMap.status, statusMap.requests)),
toTypeName(sumMapWithOverflow(statusMap.status, statusMap.requests)),
FROM sum_map
GROUP BY timeslot | {"source_file": "summapwithoverflow.md"} | [
-0.0018296422204002738,
0.00021040407591499388,
0.024798814207315445,
0.0031777876429259777,
-0.07061644643545151,
-0.03534148260951042,
0.07047729939222336,
0.021827543154358864,
-0.032444413751363754,
-0.0012566209770739079,
-0.06240085884928703,
0.01283289398998022,
0.024195363745093346,
... |
f31cfb9c-7dc1-4cd9-b4d6-bc7e8b507d95 | sql
SELECT
timeslot,
toTypeName(sumMap(statusMap.status, statusMap.requests)),
toTypeName(sumMapWithOverflow(statusMap.status, statusMap.requests)),
FROM sum_map
GROUP BY timeslot
Equivalently, we could have used the tuple syntax for the same result.
sql
SELECT
timeslot,
toTypeName(sumMap(statusMapTuple)),
toTypeName(sumMapWithOverflow(statusMapTuple)),
FROM sum_map
GROUP BY timeslot
Result:
text
βββββββββββββtimeslotββ¬βtoTypeName(sumMap(statusMap.status, statusMap.requests))ββ¬βtoTypeName(sumMapWithOverflow(statusMap.status, statusMap.requests))ββ
1. β 2000-01-01 00:01:00 β Tuple(Array(UInt8), Array(UInt64)) β Tuple(Array(UInt8), Array(UInt8)) β
2. β 2000-01-01 00:00:00 β Tuple(Array(UInt8), Array(UInt64)) β Tuple(Array(UInt8), Array(UInt8)) β
βββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
See Also
sumMap | {"source_file": "summapwithoverflow.md"} | [
…384-dimension embedding vector (truncated in source)… |
3a35ef14-d935-4dc5-a0c6-392b94f93064 | description: 'Calculates the approximate number of different values of the argument.'
sidebar_position: 204
slug: /sql-reference/aggregate-functions/reference/uniq
title: 'uniq'
doc_type: 'reference'
uniq
Calculates the approximate number of different values of the argument.
sql
uniq(x[, ...])
Arguments
The function takes a variable number of parameters. Parameters can be
Tuple
,
Array
,
Date
,
DateTime
,
String
, or numeric types.
Returned value
A
UInt64
-type number.
Implementation details
Function:
Calculates a hash for all parameters in the aggregate, then uses it in calculations.
Uses an adaptive sampling algorithm. For the calculation state, the function uses a sample of element hash values up to 65536. This algorithm is very accurate and very efficient on the CPU. When the query contains several of these functions, using
uniq
is almost as fast as using other aggregate functions.
Provides the result deterministically (it does not depend on the query processing order).
We recommend using this function in almost all scenarios.
See Also
uniqCombined
uniqCombined64
uniqHLL12
uniqExact
uniqTheta | {"source_file": "uniq.md"} | [
…384-dimension embedding vector (truncated in source)… |
49de43fa-c149-45e5-bcc8-d317b81dbf3f | description: 'Computes an approximate quantile of a numeric data sequence using the
t-digest algorithm.'
sidebar_position: 178
slug: /sql-reference/aggregate-functions/reference/quantiletdigest
title: 'quantileTDigest'
doc_type: 'reference'
quantileTDigest
Computes an approximate
quantile
of a numeric data sequence using the
t-digest
algorithm.
Memory consumption is
log(n)
, where
n
is the number of values. The result depends on the order of running the query, and is nondeterministic.
The performance of the function is lower than performance of
quantile
or
quantileTiming
. In terms of the ratio of State size to precision, this function is much better than
quantile
.
When using multiple
quantile*
functions with different levels in a query, the internal states are not combined (that is, the query works less efficiently than it could). In this case, use the
quantiles
function.
Syntax
sql
quantileTDigest(level)(expr)
Alias:
medianTDigest
.
Arguments
level
β Level of quantile. Optional parameter. Constant floating-point number from 0 to 1. We recommend using a
level
value in the range of
[0.01, 0.99]
. Default value: 0.5. At
level=0.5
the function calculates
median
.
expr
β Expression over the column values resulting in numeric
data types
,
Date
or
DateTime
.
Returned value
Approximate quantile of the specified level.
Type:
Float64
for numeric data type input.
Date
if input values have the
Date
type.
DateTime
if input values have the
DateTime
type.
Example
Query:
sql
SELECT quantileTDigest(number) FROM numbers(10)
Result:
text
┌─quantileTDigest(number)─┐
│                     4.5 │
└─────────────────────────┘
See Also
median
quantiles | {"source_file": "quantiletdigest.md"} | [
…384-dimension embedding vector (truncated in source)… |
4e1109d1-bc4c-4baf-ab10-8d12b77568be | description: 'Calculates the moving average of input values.'
sidebar_position: 144
slug: /sql-reference/aggregate-functions/reference/grouparraymovingavg
title: 'groupArrayMovingAvg'
doc_type: 'reference'
groupArrayMovingAvg
Calculates the moving average of input values.
sql
groupArrayMovingAvg(numbers_for_summing)
groupArrayMovingAvg(window_size)(numbers_for_summing)
The function can take the window size as a parameter. If left unspecified, the function takes the window size equal to the number of rows in the column.
Arguments
numbers_for_summing
β
Expression
resulting in a numeric data type value.
window_size
β Size of the calculation window.
Returned values
Array of the same size and type as the input data.
The function uses
rounding towards zero
. It truncates the decimal places insignificant for the resulting data type.
Example
The sample table
t
:
sql
CREATE TABLE t
(
`int` UInt8,
`float` Float32,
`dec` Decimal32(2)
)
ENGINE = TinyLog
text
┌─int─┬─float─┬──dec─┐
│   1 │   1.1 │ 1.10 │
│   2 │   2.2 │ 2.20 │
│   4 │   4.4 │ 4.40 │
│   7 │  7.77 │ 7.77 │
└─────┴───────┴──────┘
The queries:
sql
SELECT
groupArrayMovingAvg(int) AS I,
groupArrayMovingAvg(float) AS F,
groupArrayMovingAvg(dec) AS D
FROM t
text
┌─I─────────┬─F───────────────────────────────────┬─D─────────────────────┐
│ [0,0,1,3] │ [0.275,0.82500005,1.9250001,3.8675] │ [0.27,0.82,1.92,3.86] │
└───────────┴─────────────────────────────────────┴───────────────────────┘
sql
SELECT
groupArrayMovingAvg(2)(int) AS I,
groupArrayMovingAvg(2)(float) AS F,
groupArrayMovingAvg(2)(dec) AS D
FROM t
text
┌─I─────────┬─F────────────────────────────────┬─D─────────────────────┐
│ [0,1,3,5] │ [0.55,1.6500001,3.3000002,6.085] │ [0.55,1.65,3.30,6.08] │
└───────────┴──────────────────────────────────┴───────────────────────┘ | {"source_file": "grouparraymovingavg.md"} | [
…384-dimension embedding vector (truncated in source)… |
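The windowed averaging documented above (window sums divided by the window size, truncated toward zero for integer columns) can be sketched in Python. This is an illustrative model, not ClickHouse's implementation; the function name and the `integer` flag are ours:

```python
def group_array_moving_avg(values, window_size=None, integer=False):
    """Model of groupArrayMovingAvg: element i is the average of the last
    `window_size` inputs seen so far (missing inputs count as 0).
    With integer=True the average is truncated toward zero, mirroring the
    "rounding towards zero" note in the docs."""
    if window_size is None:
        window_size = len(values)          # default: whole column
    out, running = [], 0.0
    for i, v in enumerate(values):
        running += v
        if i >= window_size:               # slide the window
            running -= values[i - window_size]
        avg = running / window_size
        out.append(int(avg) if integer else avg)
    return out
```

On the sample `int` column `[1, 2, 4, 7]` from the docs, `window_size=2` reproduces `[0,1,3,5]` and the default whole-column window reproduces `[0,0,1,3]`.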
17d9e38b-6780-430c-beaa-2b0dee64a44d | description: 'Computes a rank correlation coefficient.'
sidebar_position: 182
slug: /sql-reference/aggregate-functions/reference/rankCorr
title: 'rankCorr'
doc_type: 'reference'
rankCorr
Computes a rank correlation coefficient.
Syntax
sql
rankCorr(x, y)
Arguments
x
β Arbitrary value.
Float32
or
Float64
.
y
β Arbitrary value.
Float32
or
Float64
.
Returned value(s)
Returns a rank correlation coefficient of the ranks of x and y. The value of the correlation coefficient ranges from -1 to +1. If fewer than two arguments are passed, the function throws an exception. A value close to +1 denotes a strong positive relationship: as one random variable increases, the second random variable also increases. A value close to -1 denotes a strong negative relationship: as one random variable increases, the second random variable decreases. A value close or equal to 0 denotes no relationship between the two random variables.
Type:
Float64
.
Example
Query:
sql
SELECT rankCorr(number, number) FROM numbers(100);
Result:
text
┌─rankCorr(number, number)─┐
│                        1 │
└──────────────────────────┘
Query:
sql
SELECT roundBankers(rankCorr(exp(number), sin(number)), 3) FROM numbers(100);
Result:
text
┌─roundBankers(rankCorr(exp(number), sin(number)), 3)─┐
│                                              -0.037 │
└─────────────────────────────────────────────────────┘
See Also
Spearman's rank correlation coefficient | {"source_file": "rankCorr.md"} | [
…384-dimension embedding vector (truncated in source)… |
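A Spearman-style rank correlation (the Pearson correlation of the two rank vectors, with tied values sharing average ranks) can be sketched in Python to illustrate the math the docs reference; this is not ClickHouse's implementation, and the names are ours:

```python
def average_ranks(xs):
    """1-based ranks; tied values share the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                          # extend the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def rank_corr(x, y):
    """Pearson correlation of the rank vectors (Spearman's rho)."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

For constant input the denominator is zero, so the coefficient is undefined in this sketch.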
65b1510b-4018-4147-acee-7e614c536ee4 | description: 'Similar to covarSamp but works slower while providing a lower computational
error.'
sidebar_position: 126
slug: /sql-reference/aggregate-functions/reference/covarsampstable
title: 'covarSampStable'
doc_type: 'reference'
covarSampStable
Calculates the value of
Σ((x - x̄)(y - ȳ)) / (n - 1)
. Similar to
covarSamp
but works slower while providing a lower computational error.
Syntax
sql
covarSampStable(x, y)
Arguments
x
β first variable.
(U)Int*
,
Float*
,
Decimal
.
y
β second variable.
(U)Int*
,
Float*
,
Decimal
.
Returned Value
The sample covariance between
x
and
y
. For
n <= 1
,
inf
is returned.
Float64
.
Example
Query:
sql
DROP TABLE IF EXISTS series;
CREATE TABLE series(i UInt32, x_value Float64, y_value Float64) ENGINE = Memory;
INSERT INTO series(i, x_value, y_value) VALUES (1, 5.6,-4.4),(2, -9.6,3),(3, -1.3,-4),(4, 5.3,9.7),(5, 4.4,0.037),(6, -8.6,-7.8),(7, 5.1,9.3),(8, 7.9,-3.6),(9, -8.2,0.62),(10, -3,7.3);
sql
SELECT covarSampStable(x_value, y_value)
FROM
(
SELECT
x_value,
y_value
FROM series
);
Result:
reference
┌─covarSampStable(x_value, y_value)─┐
│                 7.206275555555556 │
└───────────────────────────────────┘
Query:
sql
SELECT covarSampStable(x_value, y_value)
FROM
(
SELECT
x_value,
y_value
FROM series LIMIT 1
);
Result:
reference
┌─covarSampStable(x_value, y_value)─┐
│                               inf │
└───────────────────────────────────┘ | {"source_file": "covarsampstable.md"} | [
…384-dimension embedding vector (truncated in source)… |
78af6812-5190-47f3-a81b-eac5a9f5ab0f | description: 'Calculates the weighted arithmetic mean.'
sidebar_position: 113
slug: /sql-reference/aggregate-functions/reference/avgweighted
title: 'avgWeighted'
doc_type: 'reference'
avgWeighted
Calculates the
weighted arithmetic mean
.
Syntax
sql
avgWeighted(x, weight)
Arguments
x
β Values.
weight
β Weights of the values.
x
and
weight
must both be
Integer
or
floating-point
,
but may have different types.
Returned value
NaN
if all the weights are equal to 0 or the supplied weights parameter is empty.
Weighted mean otherwise.
Return type
is always
Float64
.
Example
Query:
sql
SELECT avgWeighted(x, w)
FROM VALUES('x Int8, w Int8', (4, 1), (1, 0), (10, 2))
Result:
text
┌─avgWeighted(x, weight)─┐
│                      8 │
└────────────────────────┘
Example
Query:
sql
SELECT avgWeighted(x, w)
FROM VALUES('x Int8, w Float64', (4, 1), (1, 0), (10, 2))
Result:
text
┌─avgWeighted(x, weight)─┐
│                      8 │
└────────────────────────┘
Example
Query:
sql
SELECT avgWeighted(x, w)
FROM VALUES('x Int8, w Int8', (0, 0), (1, 0), (10, 0))
Result:
text
┌─avgWeighted(x, weight)─┐
│                    nan │
└────────────────────────┘
Example
Query:
sql
CREATE TABLE test (t UInt8) ENGINE = Memory;
SELECT avgWeighted(t) FROM test
Result:
text
┌─avgWeighted(x, weight)─┐
│                    nan │
└────────────────────────┘ | {"source_file": "avgweighted.md"} | [
…384-dimension embedding vector (truncated in source)… |
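The weighted-mean rule above, including the documented NaN cases, can be sketched in Python (an illustrative model; the function name is ours):

```python
import math

def avg_weighted(pairs):
    """Weighted arithmetic mean: sum(x * w) / sum(w).
    Returns NaN when the weights sum to zero or the input is empty,
    matching the documented behavior."""
    num = sum(x * w for x, w in pairs)
    den = sum(w for _, w in pairs)
    return num / den if den != 0 else math.nan
```

On the first example from the docs, `(4, 1), (1, 0), (10, 2)` gives `(4 + 0 + 20) / 3 = 8`.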
ddb63fb2-54ec-4cab-8184-632a82b037db | description: 'Aggregate function that re-samples time series data to the specified grid.'
sidebar_position: 226
slug: /sql-reference/aggregate-functions/reference/timeSeriesResampleToGridWithStaleness
title: 'timeSeriesResampleToGridWithStaleness'
doc_type: 'reference'
Aggregate function that takes time series data as pairs of timestamps and values and re-samples this data to a regular time grid described by start timestamp, end timestamp and step. For each point on the grid the most recent (within the specified time window) sample is chosen.
Alias:
timeSeriesLastToGrid
.
Parameters:
-
start timestamp
- specifies start of the grid
-
end timestamp
- specifies end of the grid
-
grid step
- specifies step of the grid in seconds
-
staleness window
- specifies the maximum "staleness" of the most recent sample in seconds
Arguments:
-
timestamp
- timestamp of the sample
-
value
- value of the time series corresponding to the
timestamp
Return value:
time series values re-sampled to the specified grid as an
Array(Nullable(Float64))
. The returned array contains one value for each time grid point. The value is NULL if there is no sample for a particular grid point.
Example:
The following query re-samples time series data to the grid [90, 105, 120, 135, 150, 165, 180, 195, 210] by choosing the value no older than 30 sec for each point on the grid:
sql
WITH
-- NOTE: the gap between 140 and 190 is to show how values are filled for ts = 150, 165, 180 according to staleness window parameter
[110, 120, 130, 140, 190, 200, 210, 220, 230]::Array(DateTime) AS timestamps,
[1, 1, 3, 4, 5, 5, 8, 12, 13]::Array(Float32) AS values, -- array of values corresponding to timestamps above
90 AS start_ts, -- start of timestamp grid
90 + 120 AS end_ts, -- end of timestamp grid
15 AS step_seconds, -- step of timestamp grid
30 AS window_seconds -- "staleness" window
SELECT timeSeriesResampleToGridWithStaleness(start_ts, end_ts, step_seconds, window_seconds)(timestamp, value)
FROM
(
-- This subquery converts arrays of timestamps and values into rows of `timestamp`, `value`
SELECT
arrayJoin(arrayZip(timestamps, values)) AS ts_and_val,
ts_and_val.1 AS timestamp,
ts_and_val.2 AS value
);
Response:
response
   ┌─timeSeriesResa⋯stamp, value)─┐
1. │ [NULL,NULL,1,3,4,4,NULL,5,8] │
   └──────────────────────────────┘
Also it is possible to pass multiple samples of timestamps and values as Arrays of equal size. The same query with array arguments:
sql
WITH
[110, 120, 130, 140, 190, 200, 210, 220, 230]::Array(DateTime) AS timestamps,
[1, 1, 3, 4, 5, 5, 8, 12, 13]::Array(Float32) AS values,
90 AS start_ts,
90 + 120 AS end_ts,
15 AS step_seconds,
30 AS window_seconds
SELECT timeSeriesResampleToGridWithStaleness(start_ts, end_ts, step_seconds, window_seconds)(timestamps, values); | {"source_file": "timeSeriesResampleToGridWithStaleness.md"} | [
…384-dimension embedding vector (truncated in source)… |
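The grid-selection rule described above — for each grid point, take the most recent sample that is not older than the staleness window — can be sketched in Python (an illustrative model; names are ours, not ClickHouse's):

```python
def resample_to_grid_with_staleness(start, end, step, window, samples):
    """For each grid point t in [start, end] with the given step, pick the
    most recent sample whose timestamp is <= t and at most `window` seconds
    older; None when no sample qualifies. `samples` is (timestamp, value)."""
    samples = sorted(samples)
    result = []
    for t in range(start, end + 1, step):
        chosen = None
        for ts, v in samples:
            if ts > t:
                break                      # samples are sorted; rest are newer
            if t - ts <= window:
                chosen = v                 # keep the most recent qualifying one
        result.append(chosen)
    return result
```

Run on the documented example (grid 90..210, step 15, window 30), it reproduces `[NULL,NULL,1,3,4,4,NULL,5,8]` with `None` standing in for NULL.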
34b6d191-3bd1-46d8-9966-dd6db50f917a | :::note
This function is experimental, enable it by setting
allow_experimental_ts_to_grid_aggregate_function=true
.
::: | {"source_file": "timeSeriesResampleToGridWithStaleness.md"} | [
…384-dimension embedding vector (truncated in source)… |
c768b952-601b-4b0e-89a9-c506c0fd6de4 | description: 'Computes the sample skewness of a sequence.'
sidebar_position: 186
slug: /sql-reference/aggregate-functions/reference/skewsamp
title: 'skewSamp'
doc_type: 'reference'
skewSamp
Computes the
sample skewness
of a sequence.
It represents an unbiased estimate of the skewness of a random variable if passed values form its sample.
sql
skewSamp(expr)
Arguments
expr
β
Expression
returning a number.
Returned value
The skewness of the given distribution. Type β
Float64
. If
n <= 1
(
n
is the size of the sample), then the function returns
nan
.
Example
sql
SELECT skewSamp(value) FROM series_with_value_column; | {"source_file": "skewsamp.md"} | [
…384-dimension embedding vector (truncated in source)… |
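A common "unbiased-style" sample-skewness estimator (the adjusted Fisher-Pearson form) can be sketched in Python to illustrate the statistic; ClickHouse's internal formula may differ in detail, and the name is ours:

```python
def skew_samp(xs):
    """Adjusted Fisher-Pearson sample skewness. The docs state nan is
    returned for n <= 1; this particular estimator additionally needs n > 2."""
    n = len(xs)
    if n <= 2:
        return float("nan")
    mean = sum(xs) / n
    s = (sum((x - mean) ** 2 for x in xs) / (n - 1)) ** 0.5  # sample std dev
    if s == 0:
        return float("nan")                # constant input: skewness undefined
    return n / ((n - 1) * (n - 2)) * sum(((x - mean) / s) ** 3 for x in xs)
```

A symmetric sample gives 0; a sample with a long right tail gives a positive value.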
d269558b-1568-4332-a5a9-9b8c5db3721e | description: 'Inserts a value into the array at the specified position.'
sidebar_position: 140
slug: /sql-reference/aggregate-functions/reference/grouparrayinsertat
title: 'groupArrayInsertAt'
doc_type: 'reference'
groupArrayInsertAt
Inserts a value into the array at the specified position.
Syntax
sql
groupArrayInsertAt(default_x, size)(x, pos)
If in one query several values are inserted into the same position, the function behaves in the following ways:
If a query is executed in a single thread, the first one of the inserted values is used.
If a query is executed in multiple threads, the resulting value is an undetermined one of the inserted values.
Arguments
x
β Value to be inserted.
Expression
resulting in one of the
supported data types
.
pos
β Position at which the specified element
x
is to be inserted. Index numbering in the array starts from zero.
UInt32
.
default_x
β Default value for substituting in empty positions. Optional parameter.
Expression
resulting in the data type configured for the
x
parameter. If
default_x
is not defined, the
default values
are used.
size
β Length of the resulting array. Optional parameter. When using this parameter, the default value
default_x
must be specified.
UInt32
.
Returned value
Array with inserted values.
Type:
Array
.
Example
Query:
sql
SELECT groupArrayInsertAt(toString(number), number * 2) FROM numbers(5);
Result:
text
┌─groupArrayInsertAt(toString(number), multiply(number, 2))─┐
│ ['0','','1','','2','','3','','4']                         │
└───────────────────────────────────────────────────────────┘
Query:
sql
SELECT groupArrayInsertAt('-')(toString(number), number * 2) FROM numbers(5);
Result:
text
┌─groupArrayInsertAt('-')(toString(number), multiply(number, 2))─┐
│ ['0','-','1','-','2','-','3','-','4']                          │
└────────────────────────────────────────────────────────────────┘
Query:
sql
SELECT groupArrayInsertAt('-', 5)(toString(number), number * 2) FROM numbers(5);
Result:
text
┌─groupArrayInsertAt('-', 5)(toString(number), multiply(number, 2))─┐
│ ['0','-','1','-','2']                                             │
└───────────────────────────────────────────────────────────────────┘
Multi-threaded insertion of elements into one position.
Query:
sql
SELECT groupArrayInsertAt(number, 0) FROM numbers_mt(10) SETTINGS max_block_size = 1;
As a result of this query you get random integer in the
[0,9]
range. For example:
text
┌─groupArrayInsertAt(number, 0)─┐
│ [7]                           │
└───────────────────────────────┘ | {"source_file": "grouparrayinsertat.md"} | [
…384-dimension embedding vector (truncated in source)… |
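The placement rules above (0-based positions, default fill for gaps, first-write-wins in a single thread, optional fixed size) can be sketched in Python; an illustrative model with our own names, not ClickHouse's implementation:

```python
def group_array_insert_at(pairs, default='', size=None):
    """Model of groupArrayInsertAt for string values: each (value, pos) pair
    is placed at its 0-based position; gaps are filled with `default`; with
    `size` the array is padded or cut to that length. For duplicate
    positions the first value wins (single-threaded behavior)."""
    placed = {}
    for value, pos in pairs:
        placed.setdefault(pos, value)      # first write wins
    length = size if size is not None else (max(placed) + 1 if placed else 0)
    return [placed.get(i, default) for i in range(length)]
```

Run on the documented queries (values '0'..'4' at positions 0, 2, 4, 6, 8), this reproduces all three example arrays.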
a5130576-167a-4fd6-bfee-c2b6e47fa1fc | description: 'Calculates Shannon entropy for a column of values.'
sidebar_position: 131
slug: /sql-reference/aggregate-functions/reference/entropy
title: 'entropy'
doc_type: 'reference'
entropy
Calculates
Shannon entropy
for a column of values.
Syntax
sql
entropy(val)
Arguments
val
β Column of values of any type.
Returned value
Shannon entropy.
Type:
Float64
.
Example
Query:
sql
CREATE TABLE entropy (`vals` UInt32, `strings` String) ENGINE = Memory;
INSERT INTO entropy VALUES (1, 'A'), (1, 'A'), (1,'A'), (1,'A'), (2,'B'), (2,'B'), (2,'C'), (2,'D');
SELECT entropy(vals), entropy(strings) FROM entropy;
Result:
text
┌─entropy(vals)─┬─entropy(strings)─┐
│             1 │             1.75 │
└───────────────┴──────────────────┘ | {"source_file": "entropy.md"} | [
…384-dimension embedding vector (truncated in source)… |
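Shannon entropy over a column is just `H = -Σ p·log2(p)` over the empirical distribution of its values; a Python sketch (illustrative, name ours):

```python
from collections import Counter
from math import log2

def shannon_entropy(values):
    """H = -sum(p * log2(p)) over the empirical distribution of the column."""
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in Counter(values).values())
```

On the documented table, `vals` (four 1s, four 2s) gives 1 bit and `strings` (A×4, B×2, C, D) gives 1.75 bits.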
89c3b653-c3e1-4608-a570-eafacb840453 | description: 'Calculates the approximate number of different argument values, using
the Theta Sketch Framework.'
sidebar_position: 209
slug: /sql-reference/aggregate-functions/reference/uniqthetasketch
title: 'uniqTheta'
doc_type: 'reference'
Calculates the approximate number of different argument values, using the
Theta Sketch Framework
.
sql
uniqTheta(x[, ...])
Arguments
The function takes a variable number of parameters. Parameters can be
Tuple
,
Array
,
Date
,
DateTime
,
String
, or numeric types.
Returned value
A
UInt64
-type number.
Implementation details
Function:
Calculates a hash for all parameters in the aggregate, then uses it in calculations.
Uses the
KMV
algorithm to approximate the number of different argument values.
4096 (2^12) 64-bit sketches are used. The size of the state is about 41 KB.
The relative error is 3.125% (95% confidence), see the
relative error table
for detail.
See Also
uniq
uniqCombined
uniqCombined64
uniqHLL12
uniqExact | {"source_file": "uniqthetasketch.md"} | [
…384-dimension embedding vector (truncated in source)… |
4cf4dc0a-49f7-4560-bd75-8beb0b4bf9cb | description: 'Computes an approximate quantile of a numeric data sequence.'
sidebar_position: 172
slug: /sql-reference/aggregate-functions/reference/quantiledeterministic
title: 'quantileDeterministic'
doc_type: 'reference'
quantileDeterministic
Computes an approximate
quantile
of a numeric data sequence.
This function applies
reservoir sampling
with a reservoir size up to 8192 and deterministic algorithm of sampling. The result is deterministic. To get an exact quantile, use the
quantileExact
function.
When using multiple
quantile*
functions with different levels in a query, the internal states are not combined (that is, the query works less efficiently than it could). In this case, use the
quantiles
function.
Syntax
sql
quantileDeterministic(level)(expr, determinator)
Alias:
medianDeterministic
.
Arguments
level
β Level of quantile. Optional parameter. Constant floating-point number from 0 to 1. We recommend using a
level
value in the range of
[0.01, 0.99]
. Default value: 0.5. At
level=0.5
the function calculates
median
.
expr
β Expression over the column values resulting in numeric
data types
,
Date
or
DateTime
.
determinator
β Number whose hash is used instead of a random number generator in the reservoir sampling algorithm to make the result of sampling deterministic. As a determinator you can use any deterministic positive number, for example, a user id or an event id. If the same determinator value occurs too often, the function works incorrectly.
Returned value
Approximate quantile of the specified level.
Type:
Float64
for numeric data type input.
Date
if input values have the
Date
type.
DateTime
if input values have the
DateTime
type.
Example
Input table:
text
┌─val─┐
│   1 │
│   1 │
│   2 │
│   3 │
└─────┘
Query:
sql
SELECT quantileDeterministic(val, 1) FROM t
Result:
text
┌─quantileDeterministic(val, 1)─┐
│                           1.5 │
└───────────────────────────────┘
See Also
median
quantiles | {"source_file": "quantiledeterministic.md"} | [
…384-dimension embedding vector (truncated in source)… |
6e35ee2c-cd19-4aad-9714-b066a5441f3e | description: 'Performs simple (unidimensional) linear regression.'
sidebar_position: 183
slug: /sql-reference/aggregate-functions/reference/simplelinearregression
title: 'simpleLinearRegression'
doc_type: 'reference'
simpleLinearRegression
Performs simple (unidimensional) linear regression.
sql
simpleLinearRegression(x, y)
Parameters:
x
β Column with explanatory variable values.
y
β Column with dependent variable values.
Returned values:
Constants
(k, b)
of the resulting line
y = k*x + b
.
Examples
sql
SELECT arrayReduce('simpleLinearRegression', [0, 1, 2, 3], [0, 1, 2, 3])
text
┌─arrayReduce('simpleLinearRegression', [0, 1, 2, 3], [0, 1, 2, 3])─┐
│ (1,0)                                                             │
└───────────────────────────────────────────────────────────────────┘
sql
SELECT arrayReduce('simpleLinearRegression', [0, 1, 2, 3], [3, 4, 5, 6])
text
┌─arrayReduce('simpleLinearRegression', [0, 1, 2, 3], [3, 4, 5, 6])─┐
│ (1,3)                                                             │
└───────────────────────────────────────────────────────────────────┘ | {"source_file": "simplelinearregression.md"} | [
…384-dimension embedding vector (truncated in source)… |
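The constants `(k, b)` of `y = k*x + b` come from ordinary least squares; a Python sketch of the math (illustrative, name ours):

```python
def simple_linear_regression(x, y):
    """Ordinary least squares for y = k*x + b; returns (k, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)                      # Σ(x - x̄)²
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))     # Σ(x - x̄)(y - ȳ)
    k = sxy / sxx
    return k, my - k * mx
```

On the documented inputs this reproduces `(1, 0)` and `(1, 3)`.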
03842623-8979-4601-9760-09cdc6571d0e | description: 'Calculates the population covariance'
sidebar_position: 121
slug: /sql-reference/aggregate-functions/reference/covarpop
title: 'covarPop'
doc_type: 'reference'
covarPop
Calculates the population covariance:
$$
\frac{\Sigma{(x - \bar{x})(y - \bar{y})}}{n}
$$
:::note
This function uses a numerically unstable algorithm. If you need
numerical stability
in calculations, use the
covarPopStable
function. It works slower but provides a lower computational error.
:::
Syntax
sql
covarPop(x, y)
Arguments
x
β first variable.
(U)Int*
,
Float*
,
Decimal
.
y
β second variable.
(U)Int*
,
Float*
,
Decimal
.
Returned Value
The population covariance between
x
and
y
.
Float64
.
Example
Query:
sql
DROP TABLE IF EXISTS series;
CREATE TABLE series(i UInt32, x_value Float64, y_value Float64) ENGINE = Memory;
INSERT INTO series(i, x_value, y_value) VALUES (1, 5.6, -4.4),(2, -9.6, 3),(3, -1.3, -4),(4, 5.3, 9.7),(5, 4.4, 0.037),(6, -8.6, -7.8),(7, 5.1, 9.3),(8, 7.9, -3.6),(9, -8.2, 0.62),(10, -3, 7.3);
sql
SELECT covarPop(x_value, y_value)
FROM series;
Result:
reference
┌─covarPop(x_value, y_value)─┐
│                   6.485648 │
└────────────────────────────┘ | {"source_file": "covarpop.md"} | [
…384-dimension embedding vector (truncated in source)… |
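The population-covariance formula above translates directly to Python; a sketch of the math (illustrative, name ours), ignoring the numerical-stability concerns the note mentions:

```python
def covar_pop(xs, ys):
    """Population covariance: sum((x - mean_x) * (y - mean_y)) / n."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
```

A perfectly co-moving pair gives the (positive) variance of `x`; a perfectly opposed pair gives its negative.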
b043c6da-8115-48fc-97b4-51a787dcf771 | description: 'Calculates the XOR of a bitmap column, and returns the cardinality of
type UInt64, if used with suffix -State, then it returns a bitmap object'
sidebar_position: 151
slug: /sql-reference/aggregate-functions/reference/groupbitmapxor
title: 'groupBitmapXor'
doc_type: 'reference'
groupBitmapXor
groupBitmapXor
calculates the XOR of a bitmap column and returns the cardinality of type UInt64; if used with the -State suffix, it returns a
bitmap object
.
sql
groupBitmapXor(expr)
Arguments
expr
β An expression that results in
AggregateFunction(groupBitmap, UInt*)
type.
Returned value
Value of the
UInt64
type.
Example
```sql
DROP TABLE IF EXISTS bitmap_column_expr_test2;
CREATE TABLE bitmap_column_expr_test2
(
tag_id String,
z AggregateFunction(groupBitmap, UInt32)
)
ENGINE = MergeTree
ORDER BY tag_id;
INSERT INTO bitmap_column_expr_test2 VALUES ('tag1', bitmapBuild(cast([1,2,3,4,5,6,7,8,9,10] AS Array(UInt32))));
INSERT INTO bitmap_column_expr_test2 VALUES ('tag2', bitmapBuild(cast([6,7,8,9,10,11,12,13,14,15] AS Array(UInt32))));
INSERT INTO bitmap_column_expr_test2 VALUES ('tag3', bitmapBuild(cast([2,4,6,8,10,12] AS Array(UInt32))));
SELECT groupBitmapXor(z) FROM bitmap_column_expr_test2 WHERE like(tag_id, 'tag%');
┌─groupBitmapXor(z)─┐
│                10 │
└───────────────────┘
SELECT arraySort(bitmapToArray(groupBitmapXorState(z))) FROM bitmap_column_expr_test2 WHERE like(tag_id, 'tag%');
┌─arraySort(bitmapToArray(groupBitmapXorState(z)))─┐
│ [1,3,5,6,8,10,11,13,14,15]                       │
└──────────────────────────────────────────────────┘
``` | {"source_file": "groupbitmapxor.md"} | [
…384-dimension embedding vector (truncated in source)… |
5c1a6c75-abbf-40a0-b1bf-38e4575f4200 | description: 'Calculates the maximum from
value
array according to the keys specified
in the
key
array.'
sidebar_position: 165
slug: /sql-reference/aggregate-functions/reference/maxmap
title: 'maxMap'
doc_type: 'reference'
maxMap
Calculates the maximum from
value
array according to the keys specified in the
key
array.
Syntax
sql
maxMap(key, value)
or
sql
maxMap(Tuple(key, value))
Alias:
maxMappedArrays
:::note
- Passing a tuple of keys and value arrays is identical to passing two arrays of keys and values.
- The number of elements in
key
and
value
must be the same for each row that is totaled.
:::
Parameters
key
β Array of keys.
Array
.
value
β Array of values.
Array
.
Returned value
Returns a tuple of two arrays: keys in sorted order, and values calculated for the corresponding keys.
Tuple
(
Array
,
Array
).
Example
Query:
sql
SELECT maxMap(a, b)
FROM VALUES('a Array(Char), b Array(Int64)', (['x', 'y'], [2, 2]), (['y', 'z'], [3, 1]))
Result:
text
┌─maxMap(a, b)────────────┐
│ [['x','y','z'],[2,3,1]] │
└─────────────────────────┘ | {"source_file": "maxmap.md"} | [
…384-dimension embedding vector (truncated in source)… |
ed388a19-e819-45ba-9807-1cb818063968 | description: 'Returns the population variance. Unlike varPop , this function uses
a numerically stable algorithm. It works slower but provides a lower computational
error.'
sidebar_position: 211
slug: /sql-reference/aggregate-functions/reference/varpopstable
title: 'varPopStable'
doc_type: 'reference'
varPopStable {#varpopstable}
Returns the population variance. Unlike
varPop
, this function uses a
numerically stable
algorithm. It works slower but provides a lower computational error.
Syntax
sql
varPopStable(x)
Alias:
VAR_POP_STABLE
.
Parameters
x
: Population of values to find the population variance of.
(U)Int*
,
Float*
,
Decimal*
.
Returned value
Returns the population variance of
x
.
Float64
.
Example
Query:
```sql
DROP TABLE IF EXISTS test_data;
CREATE TABLE test_data
(
x UInt8,
)
ENGINE = Memory;
INSERT INTO test_data VALUES (3),(3),(3),(4),(4),(5),(5),(7),(11),(15);
SELECT
varPopStable(x) AS var_pop_stable
FROM test_data;
```
Result:
response
┌─var_pop_stable─┐
│           14.4 │
└────────────────┘ | {"source_file": "varpopstable.md"} | [
…384-dimension embedding vector (truncated in source)… |
e44cf74c-e8f9-4622-8b51-f687545733ac | description: 'Calculates the arithmetic mean.'
sidebar_position: 112
slug: /sql-reference/aggregate-functions/reference/avg
title: 'avg'
doc_type: 'reference'
avg
Calculates the arithmetic mean.
Syntax
sql
avg(x)
Arguments
x
β input values, must be
Integer
,
Float
, or
Decimal
.
Returned value
The arithmetic mean, always as
Float64
.
NaN
if the input parameter
x
is empty.
Example
Query:
sql
SELECT avg(x) FROM VALUES('x Int8', 0, 1, 2, 3, 4, 5);
Result:
text
┌─avg(x)─┐
│    2.5 │
└────────┘
Example
Create a temp table:
Query:
sql
CREATE TABLE test (t UInt8) ENGINE = Memory;
Get the arithmetic mean:
Query:
sql
SELECT avg(t) FROM test;
Result:
text
┌─avg(x)─┐
│    nan │
└────────┘ | {"source_file": "avg.md"} | [
…384-dimension embedding vector (truncated in source)… |
d2414721-cb4d-42da-89e0-6e2b86bb7dd4 | description: 'Computes the kurtosis of a sequence.'
sidebar_position: 157
slug: /sql-reference/aggregate-functions/reference/kurtpop
title: 'kurtPop'
doc_type: 'reference'

# kurtPop

Computes the kurtosis of a sequence.

## Syntax

```sql
kurtPop(expr)
```

## Arguments

- `expr` — Expression returning a number.

## Returned value

The kurtosis of the given distribution. Type — `Float64`.

## Example

```sql
SELECT kurtPop(value) FROM series_with_value_column;
```
| {"source_file": "kurtpop.md"} |
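Population kurtosis is the fourth central moment divided by the squared variance. A minimal Python sketch of that definition (illustrative; it assumes the non-excess Pearson definition of kurtosis with population moments):

```python
def kurt_pop(values):
    """Population (non-excess) kurtosis: m4 / m2**2, using population moments."""
    n = len(values)
    mean = sum(values) / n
    m2 = sum((x - mean) ** 2 for x in values) / n  # population variance
    m4 = sum((x - mean) ** 4 for x in values) / n  # fourth central moment
    return m4 / m2 ** 2

# Under this definition a normal distribution has kurtosis ≈ 3.
print(kurt_pop([3, 3, 3, 4, 4, 5, 5, 7, 11, 15]))
```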
77da8525-b3e0-4f5b-9d0a-3cd513993cf2 | description: 'This function can be used for the purpose of testing exception safety.
It will throw an exception on creation with the specified probability.'
sidebar_position: 101
slug: /sql-reference/aggregate-functions/reference/aggthrow
title: 'aggThrow'
doc_type: 'reference'

# aggThrow

This function can be used for the purpose of testing exception safety. It will throw an exception on creation with the specified probability.

## Syntax

```sql
aggThrow(throw_prob)
```

## Arguments

- `throw_prob` — Probability to throw on creation. `Float64`.

## Returned value

An exception: `Code: 503. DB::Exception: Aggregate function aggThrow has thrown exception successfully`.

## Example

Query:

```sql
SELECT number % 2 AS even, aggThrow(number) FROM numbers(10) GROUP BY even;
```

Result:

```response
Received exception:
Code: 503. DB::Exception: Aggregate function aggThrow has thrown exception successfully: While executing AggregatingTransform. (AGGREGATE_FUNCTION_THROW)
```
| {"source_file": "aggthrow.md"} |
293792c3-3611-46dd-a36d-b49a1a76cbb6 | description: 'Calculates the `arg` value for a minimum `val` value. If there are multiple
rows with equal `val` being the minimum, which of the associated `arg` is returned
is not deterministic.'
sidebar_position: 110
slug: /sql-reference/aggregate-functions/reference/argmin
title: 'argMin'
doc_type: 'reference'

# argMin

Calculates the `arg` value for a minimum `val` value. If there are multiple rows with equal `val` being the minimum, which of the associated `arg` is returned is not deterministic.
Both parts, the `arg` and the `min`, behave as aggregate functions: they both skip `NULL` during processing and return non-`NULL` values when non-`NULL` values are available.

## Syntax

```sql
argMin(arg, val)
```

## Arguments

- `arg` — Argument.
- `val` — Value.

## Returned value

`arg` value that corresponds to the minimum `val` value.

Type: matches `arg` type.

## Example

Input table:

```text
┌─user─────┬─salary─┐
│ director │   5000 │
│ manager  │   3000 │
│ worker   │   1000 │
└──────────┴────────┘
```

Query:

```sql
SELECT argMin(user, salary) FROM salary
```

Result:

```text
┌─argMin(user, salary)─┐
│ worker               │
└──────────────────────┘
```

## Extended example

```sql
CREATE TABLE test
(
    a Nullable(String),
    b Nullable(Int64)
)
ENGINE = Memory AS
SELECT *
FROM VALUES((NULL, 0), ('a', 1), ('b', 2), ('c', 2), (NULL, NULL), ('d', NULL));

SELECT * FROM test;
┌─a────┬────b─┐
│ ᴺᵁᴸᴸ │    0 │
│ a    │    1 │
│ b    │    2 │
│ c    │    2 │
│ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │
│ d    │ ᴺᵁᴸᴸ │
└──────┴──────┘

SELECT argMin(a, b), min(b) FROM test;
┌─argMin(a, b)─┬─min(b)─┐
│ a            │      0 │ -- argMin = 'a' because it is the first non-NULL value; min(b) is from another row!
└──────────────┴────────┘

SELECT argMin(tuple(a), b) FROM test;
┌─argMin(tuple(a), b)─┐
│ (NULL)              │ -- A Tuple that contains only a NULL value is not NULL itself, so the aggregate functions won't skip that row because of the NULL value inside it
└─────────────────────┘

SELECT (argMin((a, b), b) AS t).1 AS argMinA, t.2 AS argMinB FROM test;
┌─argMinA─┬─argMinB─┐
│ ᴺᵁᴸᴸ    │       0 │ -- you can use Tuple and get both (all - tuple(*)) columns for the corresponding min(b)
└─────────┴─────────┘

SELECT argMin(a, b), min(b) FROM test WHERE a IS NULL AND b IS NULL;
┌─argMin(a, b)─┬─min(b)─┐
│ ᴺᵁᴸᴸ         │   ᴺᵁᴸᴸ │ -- All aggregated rows contain at least one NULL value because of the filter, so all rows are skipped; therefore the result is NULL
└──────────────┴────────┘

SELECT argMin(a, (b, a)), min(tuple(b, a)) FROM test;
┌─argMin(a, tuple(b, a))─┬─min(tuple(b, a))─┐
│ d                      │ (NULL,NULL)      │ -- 'd' is the first non-NULL value for the min
└────────────────────────┴──────────────────┘

SELECT argMin((a, b), (b, a)), min(tuple(b, a)) FROM test;
┌─argMin(tuple(a, b), tuple(b, a))─┬─min(tuple(b, a))─┐
│ (NULL,NULL)                      │ (NULL,NULL)      │ -- argMin returns (NULL,NULL) here because Tuple allows not skipping NULL, and min(tuple(b, a)) in this case is the minimal value for this dataset
└──────────────────────────────────┴──────────────────┘

SELECT argMin(a, tuple(b)) FROM test;
┌─argMin(a, tuple(b))─┐
│ d                   │ -- Tuple can be used in min to avoid skipping rows with NULL values as b
└─────────────────────┘
```

## See also

- Tuple
| {"source_file": "argmin.md"} |
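The NULL-skipping behavior illustrated above can be modeled in a few lines of Python (an illustrative model of the documented semantics, not ClickHouse internals): `argMin(arg, val)` skips rows where either `arg` or `val` is NULL, while plain `min(val)` only skips its own NULLs, so the two can pick different rows:

```python
def arg_min(rows):
    """argMin(arg, val): skip rows where arg or val is None; return arg of the smallest val."""
    candidates = [(val, arg) for arg, val in rows if arg is not None and val is not None]
    return min(candidates)[1] if candidates else None

def min_val(rows):
    """min(val): skips only its own NULLs (None values)."""
    vals = [val for _, val in rows if val is not None]
    return min(vals) if vals else None

rows = [(None, 0), ('a', 1), ('b', 2), ('c', 2), (None, None), ('d', None)]
print(arg_min(rows), min_val(rows))  # a 0 -- the same split result as the extended example
```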
efc3b617-9961-4813-ba9e-aae57b429d05 | description: 'It is an alias for any but it was introduced for compatibility with
Window Functions, where sometimes it is necessary to process NULL values (by default
all ClickHouse aggregate functions ignore NULL values).'
sidebar_position: 137
slug: /sql-reference/aggregate-functions/reference/first_value
title: 'first_value'
doc_type: 'reference'

# first_value

It is an alias for `any`, but it was introduced for compatibility with Window Functions, where it is sometimes necessary to process `NULL` values (by default, all ClickHouse aggregate functions ignore NULL values).

It supports declaring a modifier to respect nulls (`RESPECT NULLS`), both under Window Functions and in normal aggregations.

As with `any`, without Window Functions the result will be random if the source stream is not ordered, and the return type matches the input type (Null is only returned if the input is Nullable or the -OrNull combinator is added).

## Examples {#examples}

```sql
CREATE TABLE test_data
(
    a Int64,
    b Nullable(Int64)
)
ENGINE = Memory;

INSERT INTO test_data (a, b) VALUES (1,null), (2,3), (4, 5), (6,null);
```

### Example 1 {#example1}

By default, the NULL value is ignored.

```sql
SELECT first_value(b) FROM test_data;
```

```text
┌─any(b)─┐
│      3 │
└────────┘
```

### Example 2 {#example2}

The NULL value is ignored.

```sql
SELECT first_value(b) ignore nulls FROM test_data
```

```text
┌─any(b) IGNORE NULLS ─┐
│                    3 │
└──────────────────────┘
```

### Example 3 {#example3}

The NULL value is accepted.

```sql
SELECT first_value(b) respect nulls FROM test_data
```

```text
┌─any(b) RESPECT NULLS ─┐
│                  ᴺᵁᴸᴸ │
└───────────────────────┘
```

### Example 4 {#example4}

Stabilized result using the sub-query with `ORDER BY`.

```sql
SELECT
    first_value_respect_nulls(b),
    first_value(b)
FROM
(
    SELECT *
    FROM test_data
    ORDER BY a ASC
)
```

```text
┌─any_respect_nulls(b)─┬─any(b)─┐
│                 ᴺᵁᴸᴸ │      3 │
└──────────────────────┴────────┘
```
| {"source_file": "first_value.md"} |
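The difference between the default `IGNORE NULLS` behavior and `RESPECT NULLS` can be sketched in Python over the same ordered data (an illustrative model of the documented semantics):

```python
def first_value(values, respect_nulls=False):
    """First value in order; by default NULLs (None) are skipped, as in ClickHouse."""
    for v in values:
        if respect_nulls or v is not None:
            return v
    return None

b_ordered = [None, 3, 5, None]  # column b, ordered by a as in Example 4
print(first_value(b_ordered))                      # 3
print(first_value(b_ordered, respect_nulls=True))  # None
```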
de0ce588-b40e-47c1-b5f6-e1728b8fecbb | description: 'Calculates the sum of the numbers with Kahan compensated summation algorithm'
sidebar_position: 197
slug: /sql-reference/aggregate-functions/reference/sumkahan
title: 'sumKahan'
doc_type: 'reference'

# sumKahan

Calculates the sum of the numbers with the Kahan compensated summation algorithm.
Slower than the `sum` function.
The compensation works only for `Float` types.

## Syntax

```sql
sumKahan(x)
```

## Arguments

- `x` — Input value, must be `Integer`, `Float`, or `Decimal`.

## Returned value

The sum of the numbers, with type `Integer`, `Float`, or `Decimal` depending on the type of the input arguments.

## Example

Query:

```sql
SELECT sum(0.1), sumKahan(0.1) FROM numbers(10);
```

Result:

```text
┌───────────sum(0.1)─┬─sumKahan(0.1)─┐
│ 0.9999999999999999 │             1 │
└────────────────────┴───────────────┘
```
| {"source_file": "sumkahan.md"} |
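The compensation trick itself fits in a few lines. A Python sketch of Kahan's algorithm reproduces the example above (illustrative; ClickHouse's implementation details may differ):

```python
def sum_kahan(values):
    """Kahan compensated summation: track the low-order error lost at each add."""
    total = 0.0
    compensation = 0.0
    for x in values:
        y = x - compensation            # subtract the error carried from the last step
        t = total + y                   # low-order digits of y may be lost here...
        compensation = (t - total) - y  # ...recover them algebraically
        total = t
    return total

print(sum([0.1] * 10))       # 0.9999999999999999 -- naive accumulation
print(sum_kahan([0.1] * 10)) # 1.0 -- compensated, matching sumKahan
```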
5f78d9a4-3769-4ac0-8d84-f78894d4a68e | description: 'Counts the number of rows or not-NULL values.'
sidebar_position: 120
slug: /sql-reference/aggregate-functions/reference/count
title: 'count'
doc_type: 'reference'

# count

Counts the number of rows or not-NULL values.

ClickHouse supports the following syntaxes for `count`:

- `count(expr)` or `COUNT(DISTINCT expr)`.
- `count()` or `COUNT(*)`. The `count()` syntax is ClickHouse-specific.

## Arguments

The function can take:

- Zero parameters.
- One expression.

## Returned value

- If the function is called without parameters, it counts the number of rows.
- If the expression is passed, then the function counts how many times this expression returned not null. If the expression returns a `Nullable`-type value, then the result of `count` stays not `Nullable`. The function returns 0 if the expression returned `NULL` for all the rows.

In both cases the type of the returned value is `UInt64`.

## Details

ClickHouse supports the `COUNT(DISTINCT ...)` syntax. The behavior of this construction depends on the `count_distinct_implementation` setting. It defines which of the `uniq*` functions is used to perform the operation. The default is the `uniqExact` function.

The `SELECT count() FROM table` query is optimized by default using metadata from MergeTree. If you need to use row-level security, disable optimization using the `optimize_trivial_count_query` setting.

However, the `SELECT count(nullable_column) FROM table` query can be optimized by enabling the `optimize_functions_to_subcolumns` setting. With `optimize_functions_to_subcolumns = 1` the function reads only the `null` subcolumn instead of reading and processing the whole column data. The query `SELECT count(n) FROM table` transforms to `SELECT sum(NOT n.null) FROM table`.

## Improving COUNT(DISTINCT expr) performance

If your `COUNT(DISTINCT expr)` query is slow, consider adding a `GROUP BY` clause as this improves parallelization. You can also use a projection to create an index on the target column used with `COUNT(DISTINCT target_col)`.

## Examples

Example 1:

```sql
SELECT count() FROM t
```

```text
┌─count()─┐
│       5 │
└─────────┘
```

Example 2:

```sql
SELECT name, value FROM system.settings WHERE name = 'count_distinct_implementation'
```

```text
┌─name──────────────────────────┬─value─────┐
│ count_distinct_implementation │ uniqExact │
└───────────────────────────────┴───────────┘
```

```sql
SELECT count(DISTINCT num) FROM t
```

```text
┌─uniqExact(num)─┐
│              3 │
└────────────────┘
```

This example shows that `count(DISTINCT num)` is performed by the `uniqExact` function according to the `count_distinct_implementation` setting value.
| {"source_file": "count.md"} |
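The two counting modes (all rows vs. non-NULL results of an expression) can be modeled in Python (an illustrative model of the documented semantics, not ClickHouse code):

```python
def count(rows, expr=None):
    """count(): number of rows; count(expr): rows where expr yields a not-NULL (non-None) value."""
    if expr is None:
        return len(rows)
    return sum(1 for row in rows if expr(row) is not None)

rows = [{'n': 1}, {'n': None}, {'n': 3}]
print(count(rows))                    # 3 -- count() counts every row
print(count(rows, lambda r: r['n']))  # 2 -- count(n) skips NULLs
```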
5004bcbd-7fec-41d7-99aa-40a2da9748e4 | description: 'Adds the difference between consecutive rows. If the difference is negative,
it is ignored.'
sidebar_position: 130
slug: /sql-reference/aggregate-functions/reference/deltasumtimestamp
title: 'deltaSumTimestamp'
doc_type: 'reference'

# deltaSumTimestamp

Adds the difference between consecutive rows. If the difference is negative, it is ignored.

This function is primarily for materialized views that store data ordered by some time bucket-aligned timestamp, for example, a `toStartOfMinute` bucket. Because the rows in such a materialized view will all have the same timestamp, it is impossible for them to be merged in the correct order without storing the original, unrounded timestamp value. The `deltaSumTimestamp` function keeps track of the original `timestamp` of the values it's seen, so the values (states) of the function are correctly computed during merging of parts.

To calculate the delta sum across an ordered collection you can simply use the `deltaSum` function.

## Syntax

```sql
deltaSumTimestamp(value, timestamp)
```

## Arguments

- `value` — Input values, must be some `Integer` type or `Float` type or a `Date` or `DateTime`.
- `timestamp` — The parameter for ordering values, must be some `Integer` type or `Float` type or a `Date` or `DateTime`.

## Returned value

Accumulated differences between consecutive values, ordered by the `timestamp` parameter.

Type: `Integer` or `Float` or `Date` or `DateTime`.

## Example

Query:

```sql
SELECT deltaSumTimestamp(value, timestamp)
FROM (SELECT number AS timestamp, [0, 4, 8, 3, 0, 0, 0, 1, 3, 5][number] AS value FROM numbers(1, 10));
```

Result:

```text
┌─deltaSumTimestamp(value, timestamp)─┐
│                                  13 │
└─────────────────────────────────────┘
```
| {"source_file": "deltasumtimestamp.md"} |
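The arithmetic behind the example, order by timestamp and then add only the positive steps, can be sketched in Python (illustrative only):

```python
def delta_sum_timestamp(pairs):
    """Sum positive differences between consecutive values, ordered by timestamp."""
    values = [value for _, value in sorted(pairs)]  # order by timestamp first
    return sum(
        later - earlier
        for earlier, later in zip(values, values[1:])
        if later > earlier  # negative differences are ignored
    )

data = list(enumerate([0, 4, 8, 3, 0, 0, 0, 1, 3, 5], start=1))  # (timestamp, value)
print(delta_sum_timestamp(data))  # 13, matching the SQL example: 4 + 4 + 1 + 2 + 2
```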
c930f4af-54da-436b-99a3-c14ade687a4b | description: 'Applies the student t-test to samples from two populations.'
sidebar_label: 'studentTTest'
sidebar_position: 194
slug: /sql-reference/aggregate-functions/reference/studentttest
title: 'studentTTest'
doc_type: 'reference'

# studentTTest

Applies Student's t-test to samples from two populations.

## Syntax

```sql
studentTTest([confidence_level])(sample_data, sample_index)
```

Values of both samples are in the `sample_data` column. If `sample_index` equals 0, then the value in that row belongs to the sample from the first population. Otherwise it belongs to the sample from the second population.
The null hypothesis is that the means of the populations are equal. A normal distribution with equal variances is assumed.

## Arguments

- `sample_data` — Sample data. `Integer`, `Float` or `Decimal`.
- `sample_index` — Sample index. `Integer`.

## Parameters

- `confidence_level` — Confidence level in order to calculate confidence intervals. `Float`.

## Returned values

Tuple with two or four elements (if the optional `confidence_level` is specified):

- calculated t-statistic. `Float64`.
- calculated p-value. `Float64`.
- [calculated confidence-interval-low. `Float64`.]
- [calculated confidence-interval-high. `Float64`.]

## Example

Input table:

```text
┌─sample_data─┬─sample_index─┐
│        20.3 │            0 │
│        21.1 │            0 │
│        21.9 │            1 │
│        21.7 │            0 │
│        19.9 │            1 │
│        21.8 │            1 │
└─────────────┴──────────────┘
```

Query:

```sql
SELECT studentTTest(sample_data, sample_index) FROM student_ttest;
```

Result:

```text
┌─studentTTest(sample_data, sample_index)───┐
│ (-0.21739130434783777,0.8385421208415731) │
└───────────────────────────────────────────┘
```

## See Also

- Student's t-test
- welchTTest function
| {"source_file": "studentttest.md"} |
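The t-statistic in the example can be reproduced with a short Python sketch of the pooled-variance (equal-variance) Student's t formula (illustrative; the p-value is omitted since it needs the t-distribution CDF):

```python
from math import sqrt

def student_t_statistic(sample0, sample1):
    """Two-sample Student's t-statistic with pooled variance (equal variances assumed)."""
    n0, n1 = len(sample0), len(sample1)
    mean0 = sum(sample0) / n0
    mean1 = sum(sample1) / n1
    ss0 = sum((x - mean0) ** 2 for x in sample0)  # sum of squared deviations, sample 0
    ss1 = sum((x - mean1) ** 2 for x in sample1)  # sum of squared deviations, sample 1
    pooled_var = (ss0 + ss1) / (n0 + n1 - 2)
    return (mean0 - mean1) / sqrt(pooled_var * (1 / n0 + 1 / n1))

# Rows with sample_index = 0 vs. sample_index = 1 from the example table:
print(student_t_statistic([20.3, 21.1, 21.7], [21.9, 19.9, 21.8]))  # ≈ -0.21739
```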