id stringlengths 36 36 | document stringlengths 3 3k | metadata stringlengths 23 69 | embeddings listlengths 384 384 |
|---|---|---|---|
6ab4b0a2-d920-48d4-8d8e-68bff8dc8898 | slug: /guides/sre/network-ports
sidebar_label: 'Network ports'
title: 'Network ports'
description: 'Description of available network ports and what they are used for'
doc_type: 'reference'
keywords: ['network', 'ports', 'configuration', 'security', 'firewall']
Network ports
:::note
Ports described as **default** mean that the port number is configured in `/etc/clickhouse-server/config.xml`. To customize your settings, add a file to `/etc/clickhouse-server/config.d/`. See the configuration file documentation.
:::
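As a hypothetical example of that pattern, a small override file in `config.d` could move the HTTP port. The filename and port value below are illustrative, not from this document:

```xml
<!-- /etc/clickhouse-server/config.d/http_port.xml (hypothetical filename) -->
<clickhouse>
    <!-- Overrides the default 8123 HTTP port -->
    <http_port>8124</http_port>
</clickhouse>
```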
|Port|Description|Cloud|OSS|
|----|-----------|-----|---|
|2181|ZooKeeper default service port. **Note:** see `9181` for ClickHouse Keeper.||✓|
|8123|HTTP default port||✓|
|8443|HTTP SSL/TLS default port|✓|✓|
|9000|Native Protocol port (also referred to as ClickHouse TCP protocol). Used by ClickHouse applications and processes like `clickhouse-server`, `clickhouse-client`, and native ClickHouse tools. Used for inter-server communication for distributed queries.||✓|
|9004|MySQL emulation port||✓|
|9005|PostgreSQL emulation port (also used for secure communication if SSL is enabled for ClickHouse).||✓|
|9009|Inter-server communication port for low-level data access. Used for data exchange, replication, and inter-server communication.||✓|
|9010|SSL/TLS for inter-server communications||✓|
|9011|Native protocol PROXYv1 protocol port||✓|
|9019|JDBC bridge||✓|
|9100|gRPC port||✓|
|9181|Recommended ClickHouse Keeper port||✓|
|9234|Recommended ClickHouse Keeper Raft port (also used for secure communication if `<secure>1</secure>` is enabled)||✓|
|9363|Prometheus default metrics port||✓|
|9281|Recommended secure SSL ClickHouse Keeper port||✓|
|9440|Native protocol SSL/TLS port|✓|✓|
|42000|Graphite default port||✓| | {"source_file": "network-ports.md"} | [
-0.005691055208444595,
-0.0381222702562809,
-0.046426914632320404,
-0.07353614270687103,
-0.03845740854740143,
-0.004650451708585024,
-0.04727115109562874,
-0.04417048767209053,
-0.029453670606017113,
0.03822170943021774,
0.026042751967906952,
0.04925532266497612,
-0.02679388038814068,
0.0... |
c2d2387e-06b6-4dd8-9310-f00ffdc9bed3 | slug: /guides/sre/configuring-ssl
sidebar_label: 'Configuring SSL-TLS'
sidebar_position: 20
title: 'Configuring SSL-TLS'
description: 'This guide provides simple and minimal settings to configure ClickHouse to use OpenSSL certificates to validate connections.'
keywords: ['SSL configuration', 'TLS setup', 'OpenSSL certificates', 'secure connections', 'SRE guide']
doc_type: 'guide'
import SelfManaged from '@site/docs/_snippets/_self_managed_only_automated.md';
import configuringSsl01 from '@site/static/images/guides/sre/configuring-ssl_01.png';
import Image from '@theme/IdealImage';
Configuring SSL-TLS
This guide provides simple and minimal settings to configure ClickHouse to use OpenSSL certificates to validate connections. For this demonstration, a self-signed Certificate Authority (CA) certificate and key are created, along with node certificates, to make connections with the appropriate settings.
:::note
TLS implementation is complex and there are many options to consider to ensure a fully secure and robust deployment. This is a basic tutorial with basic SSL/TLS configuration examples. Consult with your PKI/security team to generate the correct certificates for your organization.
Review this basic tutorial on certificate usage for an introductory overview.
:::
1. Create a ClickHouse Deployment {#1-create-a-clickhouse-deployment}
This guide was written using Ubuntu 20.04 and ClickHouse installed on the following hosts using the DEB package (using apt). The domain is `marsnet.local`:

|Host |IP Address|
|--------|-------------|
|`chnode1`|192.168.1.221|
|`chnode2`|192.168.1.222|
|`chnode3`|192.168.1.223|
:::note
View the Quick Start for more details on how to install ClickHouse.
:::
2. Create SSL certificates {#2-create-ssl-certificates}
:::note
Using self-signed certificates is for demonstration purposes only; they should not be used in production. Certificate requests should be created to be signed by the organization and validated using the CA chain that will be configured in the settings. However, these steps can be used to configure and test the settings, after which they can be replaced by the actual certificates that will be used.
:::
Generate a key that will be used for the new CA:
```bash
openssl genrsa -out marsnet_ca.key 2048
```
Generate a new self-signed CA certificate. The following will create a new certificate that will be used to sign other certificates using the CA key:
```bash
openssl req -x509 -subj "/CN=marsnet.local CA" -nodes -key marsnet_ca.key -days 1095 -out marsnet_ca.crt
```
:::note
Back up the key and CA certificate in a secure location that is not on the cluster. After generating the node certificates, the key should be deleted from the cluster nodes.
:::
Verify the contents of the new CA certificate:
```bash
openssl x509 -in marsnet_ca.crt -text
``` | {"source_file": "configuring-ssl.md"} | [
-0.04086582735180855,
0.03717484325170517,
-0.0747305303812027,
0.03383291885256767,
0.013782263733446598,
-0.07879621535539627,
-0.014738719910383224,
-0.0032857449259608984,
-0.03971052169799805,
-0.05073505640029907,
0.05674717575311661,
-0.004576250910758972,
0.11607658863067627,
0.058... |
b85e088a-67ef-40ff-82c9-70be02119e00 | Verify the contents of the new CA certificate:
```bash
openssl x509 -in marsnet_ca.crt -text
```
Create a certificate request (CSR) and generate a key for each node:
```bash
openssl req -newkey rsa:2048 -nodes -subj "/CN=chnode1" -addext "subjectAltName = DNS:chnode1.marsnet.local,IP:192.168.1.221" -keyout chnode1.key -out chnode1.csr
openssl req -newkey rsa:2048 -nodes -subj "/CN=chnode2" -addext "subjectAltName = DNS:chnode2.marsnet.local,IP:192.168.1.222" -keyout chnode2.key -out chnode2.csr
openssl req -newkey rsa:2048 -nodes -subj "/CN=chnode3" -addext "subjectAltName = DNS:chnode3.marsnet.local,IP:192.168.1.223" -keyout chnode3.key -out chnode3.csr
```
Using the CSR and CA, create new certificate and key pairs:
```bash
openssl x509 -req -in chnode1.csr -out chnode1.crt -CA marsnet_ca.crt -CAkey marsnet_ca.key -days 365 -copy_extensions copy
openssl x509 -req -in chnode2.csr -out chnode2.crt -CA marsnet_ca.crt -CAkey marsnet_ca.key -days 365 -copy_extensions copy
openssl x509 -req -in chnode3.csr -out chnode3.crt -CA marsnet_ca.crt -CAkey marsnet_ca.key -days 365 -copy_extensions copy
```
Verify the certs for subject and issuer:
```bash
openssl x509 -in chnode1.crt -text -noout
```
Check that the new certificates verify against the CA cert:
```bash
openssl verify -CAfile marsnet_ca.crt chnode1.crt
```

```response
chnode1.crt: OK
```
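The same chain of steps can be exercised end to end with a throwaway CA before touching real certificates. This is a self-contained sketch; all file names here are illustrative and do not reuse the cluster's real key material:

```shell
# Work in a scratch directory so nothing collides with real certificates.
cd "$(mktemp -d)"

# Create a throwaway CA key and self-signed CA certificate.
openssl genrsa -out demo_ca.key 2048
openssl req -x509 -subj "/CN=demo CA" -nodes -key demo_ca.key -days 1 -out demo_ca.crt

# Create a leaf key and CSR, then sign the CSR with the throwaway CA.
openssl req -newkey rsa:2048 -nodes -subj "/CN=demo-node" -keyout demo.key -out demo.csr
openssl x509 -req -in demo.csr -out demo.crt -CA demo_ca.crt -CAkey demo_ca.key -CAcreateserial -days 1

# Verify the chain; prints "demo.crt: OK" on success.
openssl verify -CAfile demo_ca.crt demo.crt
```

Once this round trip works, the same `verify` check can be run against each real node certificate before restarting any servers.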
3. Create and configure a directory to store certificates and keys {#3-create-and-configure-a-directory-to-store-certificates-and-keys}
:::note
This must be done on each node. Use appropriate certificates and keys on each host.
:::
Create a folder in a directory accessible by ClickHouse on each node. We recommend the default configuration directory (e.g. `/etc/clickhouse-server`):
```bash
mkdir /etc/clickhouse-server/certs
```
Copy the CA certificate, node certificate and key corresponding to each node to the new certs directory.
Update owner and permissions to allow ClickHouse to read the certificates:
```bash
chown clickhouse:clickhouse -R /etc/clickhouse-server/certs
chmod 600 /etc/clickhouse-server/certs/*
chmod 755 /etc/clickhouse-server/certs
ll /etc/clickhouse-server/certs
```

```response
total 20
drw-r--r-- 2 clickhouse clickhouse 4096 Apr 12 20:23 ./
drwx------ 5 clickhouse clickhouse 4096 Apr 12 20:23 ../
-rw------- 1 clickhouse clickhouse 997 Apr 12 20:22 chnode1.crt
-rw------- 1 clickhouse clickhouse 1708 Apr 12 20:22 chnode1.key
-rw------- 1 clickhouse clickhouse 1131 Apr 12 20:23 marsnet_ca.crt
```
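The mode bits matter here: key material must be readable only by its owner (the `clickhouse` user), while the directory itself stays traversable. A small self-contained sketch of the intended layout, using a scratch directory and empty placeholder files rather than real keys:

```shell
# Recreate the intended permissions layout in a scratch directory.
certs="$(mktemp -d)/certs"
mkdir -p "$certs"
touch "$certs/chnode1.crt" "$certs/chnode1.key" "$certs/marsnet_ca.crt"

chmod 600 "$certs"/*   # certificates and keys: owner read/write only
chmod 755 "$certs"     # directory: owner rwx, others can traverse/list

# Show the resulting octal modes (GNU stat).
stat -c '%a %n' "$certs"/* "$certs"
```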
4. Configure the environment with basic clusters using ClickHouse Keeper {#4-configure-the-environment-with-basic-clusters-using-clickhouse-keeper}
For this deployment environment, the following ClickHouse Keeper settings are used on each node. Each server has its own `<server_id>` (for example, `<server_id>1</server_id>` for node `chnode1`, and so on). | {"source_file": "configuring-ssl.md"} | [
0.022054441273212433,
-0.002206619828939438,
-0.0579804927110672,
-0.022218290716409683,
0.008558492176234722,
-0.03821389377117157,
-0.04312007129192352,
-0.015539812855422497,
0.04794074594974518,
0.035006940364837646,
0.015241310931742191,
-0.09848438203334808,
0.08564607053995132,
-0.0... |
6a7ea96e-f7c7-4f81-84dd-6e3512a006b4 | :::note
The recommended port for ClickHouse Keeper is `9281`. However, the port is configurable and can be changed if it is already in use by another application in the environment.
For a full explanation of all options, visit https://clickhouse.com/docs/operations/clickhouse-keeper/
:::
Add the following inside the `<clickhouse>` tag in ClickHouse server `config.xml`:
:::note
For production environments, it is recommended to use a separate `.xml` config file in the `config.d` directory.
For more information, visit https://clickhouse.com/docs/operations/configuration-files/
:::
```xml
<keeper_server>
    <tcp_port_secure>9281</tcp_port_secure>
    <server_id>1</server_id>
    <log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
    <snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>

    <coordination_settings>
        <operation_timeout_ms>10000</operation_timeout_ms>
        <session_timeout_ms>30000</session_timeout_ms>
        <raft_logs_level>trace</raft_logs_level>
    </coordination_settings>

    <raft_configuration>
        <secure>true</secure>
        <server>
            <id>1</id>
            <hostname>chnode1.marsnet.local</hostname>
            <port>9444</port>
        </server>
        <server>
            <id>2</id>
            <hostname>chnode2.marsnet.local</hostname>
            <port>9444</port>
        </server>
        <server>
            <id>3</id>
            <hostname>chnode3.marsnet.local</hostname>
            <port>9444</port>
        </server>
    </raft_configuration>
</keeper_server>
```
Uncomment and update the keeper settings on all nodes and set the `<secure>` flag to 1:
```xml
<zookeeper>
    <node>
        <host>chnode1.marsnet.local</host>
        <port>9281</port>
        <secure>1</secure>
    </node>
    <node>
        <host>chnode2.marsnet.local</host>
        <port>9281</port>
        <secure>1</secure>
    </node>
    <node>
        <host>chnode3.marsnet.local</host>
        <port>9281</port>
        <secure>1</secure>
    </node>
</zookeeper>
```
Update and add the following cluster settings to `chnode1` and `chnode2`. `chnode3` will be used for the ClickHouse Keeper quorum.
:::note
For this configuration, only one example cluster is configured. The sample test clusters must be removed or commented out; alternatively, if an existing cluster is being tested, its port must be updated and the `<secure>` option added. The `<user>` and `<password>` must be set if the `default` user was initially configured with a password during installation or in the `users.xml` file.
:::
The following creates a cluster with one shard replicated across two servers (one replica on each node). | {"source_file": "configuring-ssl.md"} | [
-0.017494438216090202,
-0.012882236391305923,
-0.10594112426042557,
-0.06754334270954132,
-0.04018087312579155,
-0.07318447530269623,
-0.019093524664640427,
-0.04856501892209053,
-0.025961684063076973,
0.003015986643731594,
0.06455632299184799,
0.03204192966222763,
-0.007369540631771088,
0... |
99bd307e-6b17-48a2-b96d-f9cfc13f02ca | The following creates a cluster with one shard replicated across two servers (one replica on each node).
```xml
<remote_servers>
    <cluster_1S_2R>
        <shard>
            <replica>
                <host>chnode1.marsnet.local</host>
                <port>9440</port>
                <user>default</user>
                <password>ClickHouse123!</password>
                <secure>1</secure>
            </replica>
            <replica>
                <host>chnode2.marsnet.local</host>
                <port>9440</port>
                <user>default</user>
                <password>ClickHouse123!</password>
                <secure>1</secure>
            </replica>
        </shard>
    </cluster_1S_2R>
</remote_servers>
```
Define macros values to be able to create a ReplicatedMergeTree table for testing. On `chnode1`:
```xml
<macros>
    <shard>1</shard>
    <replica>replica_1</replica>
</macros>
```
On `chnode2`:
```xml
<macros>
    <shard>1</shard>
    <replica>replica_2</replica>
</macros>
```
5. Configure SSL-TLS interfaces on ClickHouse nodes {#5-configure-ssl-tls-interfaces-on-clickhouse-nodes}
The settings below are configured in the ClickHouse server `config.xml`.

Set the display name for the deployment (optional):
```xml
<display_name>clickhouse</display_name>
```
Set ClickHouse to listen on external ports:

```xml
<listen_host>0.0.0.0</listen_host>
```
Configure the `https` port and disable the `http` port on each node:
```xml
<https_port>8443</https_port>
<!--<http_port>8123</http_port>-->
```
Configure the ClickHouse Native secure TCP port and disable the default non-secure port on each node:
```xml
<tcp_port_secure>9440</tcp_port_secure>
<!--<tcp_port>9000</tcp_port>-->
```
Configure the `interserver https` port and disable the default non-secure port on each node:
```xml
<interserver_https_port>9010</interserver_https_port>
<!--<interserver_http_port>9009</interserver_http_port>-->
```
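Following the earlier note about `config.d`, these three port settings could also live together in a single override file instead of edits to the main `config.xml`. This is a sketch; the filename is illustrative, and the non-secure ports would still need to be commented out or removed in the main config:

```xml
<!-- /etc/clickhouse-server/config.d/secure_ports.xml (hypothetical filename) -->
<clickhouse>
    <https_port>8443</https_port>
    <tcp_port_secure>9440</tcp_port_secure>
    <interserver_https_port>9010</interserver_https_port>
</clickhouse>
```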
Configure OpenSSL with certificates and paths
:::note
Each filename and path must be updated to match the node being configured.
For example, update the `<certificateFile>` entry to `chnode2.crt` when configuring the `chnode2` host.
::: | {"source_file": "configuring-ssl.md"} | [
0.020462555810809135,
-0.038111019879579544,
-0.054939575493335724,
-0.016815241426229477,
-0.018375787883996964,
-0.06786736845970154,
-0.04830622673034668,
-0.05219409614801407,
-0.02716640569269657,
0.05304340273141861,
0.0553966760635376,
-0.07354271411895752,
0.1326245218515396,
-0.08... |
33693c95-9f3d-4c6a-8ebc-83f7cbf75539 | ```xml
<openSSL>
    <server>
        <certificateFile>/etc/clickhouse-server/certs/chnode1.crt</certificateFile>
        <privateKeyFile>/etc/clickhouse-server/certs/chnode1.key</privateKeyFile>
        <verificationMode>relaxed</verificationMode>
        <caConfig>/etc/clickhouse-server/certs/marsnet_ca.crt</caConfig>
        <cacheSessions>true</cacheSessions>
        <disableProtocols>sslv2,sslv3</disableProtocols>
        <preferServerCiphers>true</preferServerCiphers>
    </server>
    <client>
        <loadDefaultCAFile>false</loadDefaultCAFile>
        <caConfig>/etc/clickhouse-server/certs/marsnet_ca.crt</caConfig>
        <cacheSessions>true</cacheSessions>
        <disableProtocols>sslv2,sslv3</disableProtocols>
        <preferServerCiphers>true</preferServerCiphers>
        <verificationMode>relaxed</verificationMode>
        <invalidCertificateHandler>
            <name>RejectCertificateHandler</name>
        </invalidCertificateHandler>
    </client>
</openSSL>
```

For more information, visit https://clickhouse.com/docs/operations/server-configuration-parameters/settings/#server_configuration_parameters-openssl
Configure gRPC for SSL on every node:
```xml
<grpc>
    <enable_ssl>1</enable_ssl>
    <ssl_cert_file>/etc/clickhouse-server/certs/chnode1.crt</ssl_cert_file>
    <ssl_key_file>/etc/clickhouse-server/certs/chnode1.key</ssl_key_file>
    <ssl_require_client_auth>true</ssl_require_client_auth>
    <ssl_ca_cert_file>/etc/clickhouse-server/certs/marsnet_ca.crt</ssl_ca_cert_file>
    <transport_compression_type>none</transport_compression_type>
    <transport_compression_level>0</transport_compression_level>
    <max_send_message_size>-1</max_send_message_size>
    <max_receive_message_size>-1</max_receive_message_size>
    <verbose_logs>false</verbose_logs>
</grpc>
```
For more information, visit https://clickhouse.com/docs/interfaces/grpc/
Configure the ClickHouse client on at least one of the nodes to use SSL for connections in its own `config.xml` file (by default in `/etc/clickhouse-client/`):
```xml
<openSSL>
    <client>
        <loadDefaultCAFile>false</loadDefaultCAFile>
        <caConfig>/etc/clickhouse-server/certs/marsnet_ca.crt</caConfig>
        <cacheSessions>true</cacheSessions>
        <disableProtocols>sslv2,sslv3</disableProtocols>
        <preferServerCiphers>true</preferServerCiphers>
        <invalidCertificateHandler>
            <name>RejectCertificateHandler</name>
        </invalidCertificateHandler>
    </client>
</openSSL>
```
Disable default emulation ports for MySQL and PostgreSQL:
```xml
<!-- <mysql_port>9004</mysql_port> -->
<!-- <postgresql_port>9005</postgresql_port> -->
```
6. Testing {#6-testing}
Start all nodes, one at a time:
```bash
service clickhouse-server start
``` | {"source_file": "configuring-ssl.md"} | [
-0.01772184856235981,
0.04814593866467476,
-0.07503347843885422,
-0.04545004293322563,
0.026720503345131874,
-0.11565049737691879,
-0.058186911046504974,
-0.08930320292711258,
0.04781366139650345,
-0.015392754226922989,
0.07402794808149338,
-0.04450216516852379,
0.024630524218082428,
0.032... |
c5e73c1f-4bc5-4607-b1de-90eac6cd74b3 | 6. Testing {#6-testing}
Start all nodes, one at a time:
```bash
service clickhouse-server start
```
Verify that the secure ports are up and listening; the output should look similar to this example on each node:
```bash
root@chnode1:/etc/clickhouse-server# netstat -ano | grep tcp
```

```response
tcp 0 0 0.0.0.0:9010 0.0.0.0:* LISTEN off (0.00/0/0)
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN off (0.00/0/0)
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN off (0.00/0/0)
tcp 0 0 0.0.0.0:8443 0.0.0.0:* LISTEN off (0.00/0/0)
tcp 0 0 0.0.0.0:9440 0.0.0.0:* LISTEN off (0.00/0/0)
tcp 0 0 0.0.0.0:9281 0.0.0.0:* LISTEN off (0.00/0/0)
tcp 0 0 192.168.1.221:33046 192.168.1.222:9444 ESTABLISHED off (0.00/0/0)
tcp 0 0 192.168.1.221:42730 192.168.1.223:9444 ESTABLISHED off (0.00/0/0)
tcp 0 0 192.168.1.221:51952 192.168.1.222:9281 ESTABLISHED off (0.00/0/0)
tcp 0 0 192.168.1.221:22 192.168.1.210:49801 ESTABLISHED keepalive (6618.05/0/0)
tcp 0 64 192.168.1.221:22 192.168.1.210:59195 ESTABLISHED on (0.24/0/0)
tcp6 0 0 :::22 :::* LISTEN off (0.00/0/0)
tcp6 0 0 :::9444 :::* LISTEN off (0.00/0/0)
tcp6 0 0 192.168.1.221:9444 192.168.1.222:59046 ESTABLISHED off (0.00/0/0)
tcp6 0 0 192.168.1.221:9444 192.168.1.223:41976 ESTABLISHED off (0.00/0/0)
```
|ClickHouse Port |Description|
|--------|-------------|
|8443 | https interface|
|9010 | interserver https port|
|9281 | ClickHouse Keeper secure port|
|9440 | secure Native TCP protocol|
|9444 | ClickHouse Keeper Raft port |
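To script this check, the `LISTEN` lines can be reduced to a sorted list of ports. This sketch runs against a captured sample (inlined below), since live `netstat` output varies per host:

```shell
# Save a small netstat sample; on a real node you would pipe `netstat -ano` instead.
cat > /tmp/netstat_sample.txt <<'EOF'
tcp        0      0 0.0.0.0:9010            0.0.0.0:*               LISTEN      off (0.00/0/0)
tcp        0      0 0.0.0.0:8443            0.0.0.0:*               LISTEN      off (0.00/0/0)
tcp        0      0 0.0.0.0:9440            0.0.0.0:*               LISTEN      off (0.00/0/0)
tcp        0      0 0.0.0.0:9281            0.0.0.0:*               LISTEN      off (0.00/0/0)
tcp        0      0 192.168.1.221:22        192.168.1.210:49801     ESTABLISHED keepalive
EOF

# Keep LISTEN lines, take the port from the local address (field 4), sort numerically.
awk '$6 == "LISTEN" { n = split($4, a, ":"); print a[n] }' /tmp/netstat_sample.txt | sort -n
```

On this sample, the output is the four secure ports: 8443, 9010, 9281, and 9440.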
Verify ClickHouse Keeper health
The typical 4 letter word (4LW) commands will not work using `echo` without TLS; here is how to use the commands with `openssl`.
Start an interactive session with `openssl`:

```bash
openssl s_client -connect chnode1.marsnet.local:9281
```
```response
CONNECTED(00000003)
depth=0 CN = chnode1
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 CN = chnode1
verify error:num=21:unable to verify the first certificate
verify return:1
Certificate chain
0 s:CN = chnode1
i:CN = marsnet.local CA
Server certificate
-----BEGIN CERTIFICATE-----
MIICtDCCAZwCFD321grxU3G5pf6hjitf2u7vkusYMA0GCSqGSIb3DQEBCwUAMBsx
...
```
Send the 4LW commands in the openssl session:

```bash
mntr
```
```response | {"source_file": "configuring-ssl.md"} | [
0.026217980310320854,
-0.03687523305416107,
-0.0807238221168518,
-0.03653538227081299,
0.010030116885900497,
-0.060802143067121506,
-0.020564131438732147,
-0.08374352008104324,
-0.02948683127760887,
0.029790330678224564,
-0.013736111111938953,
0.007657236885279417,
-0.01784362457692623,
-0... |
e557e7d2-e6b8-4210-b31b-00bce3f7d003 | Send the 4LW commands in the openssl session:

```bash
mntr
```
```response
Post-Handshake New Session Ticket arrived:
SSL-Session:
Protocol : TLSv1.3
...
read R BLOCK
zk_version v22.7.3.5-stable-e140b8b5f3a5b660b6b576747063fd040f583cf3
zk_avg_latency 0
# highlight-next-line
zk_max_latency 4087
zk_min_latency 0
zk_packets_received 4565774
zk_packets_sent 4565773
zk_num_alive_connections 2
zk_outstanding_requests 0
# highlight-next-line
zk_server_state leader
zk_znode_count 1087
zk_watch_count 26
zk_ephemerals_count 12
zk_approximate_data_size 426062
zk_key_arena_size 258048
zk_latest_snapshot_size 0
zk_open_file_descriptor_count 187
zk_max_file_descriptor_count 18446744073709551615
# highlight-next-line
zk_followers 2
zk_synced_followers 1
closed
```
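For monitoring scripts, the `mntr` reply is simply tab-separated key/value pairs, so individual metrics are easy to extract. This sketch works against a captured sample (a live reply would require a running Keeper reached via `openssl s_client` as shown above):

```shell
# Save a small sample of an mntr reply (tab-separated key/value pairs).
printf 'zk_avg_latency\t0\nzk_max_latency\t4087\nzk_server_state\tleader\nzk_followers\t2\nzk_synced_followers\t1\n' > /tmp/mntr_sample.txt

# Pull out individual metrics by key.
awk -F'\t' '$1 == "zk_server_state"     { print $2 }' /tmp/mntr_sample.txt   # leader
awk -F'\t' '$1 == "zk_synced_followers" { print $2 }' /tmp/mntr_sample.txt   # 1
```

A check like `zk_server_state` being `leader` on exactly one node, with `zk_synced_followers` matching the expected follower count, is a reasonable basis for a Keeper health probe.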
Start the ClickHouse client using the `--secure` flag and SSL port:
```bash
root@chnode1:/etc/clickhouse-server# clickhouse-client --user default --password ClickHouse123! --port 9440 --secure --host chnode1.marsnet.local
ClickHouse client version 22.3.3.44 (official build).
Connecting to chnode1.marsnet.local:9440 as user default.
Connected to ClickHouse server version 22.3.3 revision 54455.
clickhouse :)
```
Log into the Play UI using the `https` interface at https://chnode1.marsnet.local:8443/play.
:::note
The browser will show an untrusted certificate since it is being reached from a workstation and the certificates are not in the root CA stores on the client machine.
When using certificates issued by a public authority or an enterprise CA, it should show as trusted.
:::
Create a replicated table:
```sql
clickhouse :) CREATE TABLE repl_table ON CLUSTER cluster_1S_2R
(
    id UInt64,
    column1 Date,
    column2 String
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/default/repl_table', '{replica}' )
ORDER BY (id);
```
```response
┌─host──────────────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ chnode2.marsnet.local │ 9440 │      0 │       │                   1 │                0 │
│ chnode1.marsnet.local │ 9440 │      0 │       │                   0 │                0 │
└───────────────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
```
Add a couple of rows on `chnode1`:
```sql
INSERT INTO repl_table
(id, column1, column2)
VALUES
(1,'2022-04-01','abc'),
(2,'2022-04-02','def');
```
Verify the replication by viewing the rows on `chnode2`:
```sql
SELECT * FROM repl_table
```

```response
┌─id─┬────column1─┬─column2─┐
│  1 │ 2022-04-01 │ abc     │
│  2 │ 2022-04-02 │ def     │
└────┴────────────┴─────────┘
```
Summary {#summary} | {"source_file": "configuring-ssl.md"} | [
-0.020140131935477257,
0.1350858062505722,
-0.1032475084066391,
0.016570011153817177,
0.042783647775650024,
-0.09167318791151047,
-0.017751645296812057,
-0.0012379917316138744,
0.049862414598464966,
0.04150751605629921,
-0.05070694163441658,
-0.059621840715408325,
0.07164572924375534,
0.04... |
76f91949-9a3f-4b64-a069-4a071fa5761b | ```response
┌─id─┬────column1─┬─column2─┐
│  1 │ 2022-04-01 │ abc     │
│  2 │ 2022-04-02 │ def     │
└────┴────────────┴─────────┘
```
Summary {#summary}
This article focused on configuring a ClickHouse environment with SSL/TLS. The settings will differ with the requirements of production environments (for example, certificate verification levels, protocols, and ciphers), but you should now have a good understanding of the steps involved in configuring and implementing secure connections. | {"source_file": "configuring-ssl.md"} | [
-0.06438150256872177,
0.013359579257667065,
-0.06508748978376389,
-0.022619135677814484,
-0.10430187731981277,
-0.04036460816860199,
0.015340711921453476,
-0.05188190937042236,
-0.001651165192015469,
-0.05131871625781059,
0.03516851365566254,
-0.01752813160419464,
0.05870623141527176,
0.00... |
79677552-2a09-4747-bd0d-8657465959b6 | slug: /security-and-authentication
title: 'Security and Authentication'
description: 'Landing page for Security and Authentication'
doc_type: 'landing-page'
keywords: ['security and authentication', 'access control', 'RBAC', 'user management', 'SRE guide']
| Page | Description |
|------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------|
| Users and Roles | Learn more about how ClickHouse supports access control management based on an RBAC approach. |
| External Authenticators | Learn more about how OSS ClickHouse supports authenticating and managing users using external services. | | {"source_file": "index.md"} | [
-0.03845818340778351,
0.02549237385392189,
-0.10765707492828369,
-0.006053951568901539,
0.004672447685152292,
0.012096346355974674,
0.06003952398896217,
-0.020874977111816406,
-0.03744080290198326,
-0.0212114080786705,
0.036461781710386276,
0.01735484041273594,
0.08968460559844971,
-0.0193... |
f875af0e-5e47-42ad-8679-73214c0ef116 | slug: /guides/sre/scaling-clusters
sidebar_label: 'Rebalancing shards'
sidebar_position: 20
description: 'ClickHouse does not support automatic shard rebalancing, so we provide some best practices for how to rebalance shards.'
title: 'Rebalancing Data'
doc_type: 'guide'
keywords: ['scaling', 'clusters', 'horizontal scaling', 'capacity planning', 'performance']
Rebalancing data
ClickHouse does not support automatic shard rebalancing. However, there are ways to rebalance shards, listed below in order of preference:

1. Adjust the shard weighting for the distributed table, allowing writes to be biased towards the new shard. This can cause load imbalances and hot spots on the cluster, but is viable in most scenarios where write throughput is not extremely high. It does not require the user to change their write target, i.e., it can remain the distributed table. This does not assist with rebalancing existing data.
2. As an alternative to (1), modify the existing cluster and write exclusively to the new shard until the cluster is balanced, manually weighting writes. This has the same limitations as (1).
3. If you need to rebalance existing data and you have partitioned your data, consider detaching partitions and manually relocating them to another node before reattaching them to the new shard. This is more manual than the subsequent technique but may be faster and less resource-intensive.
4. Export the data from the source cluster to the new cluster via an `INSERT FROM SELECT`. This will not be performant on very large datasets and will potentially incur significant IO on the source cluster and use considerable network resources. This represents a last resort. | {"source_file": "scaling-clusters.md"} | [
0.038870371878147125,
-0.09602338075637817,
-0.0056998105719685555,
0.04170551151037216,
-0.04137993976473808,
-0.04271595552563667,
-0.08408941328525543,
-0.01195767242461443,
-0.026213999837636948,
0.03603217378258705,
-0.003129939315840602,
0.04206134378910065,
0.050187453627586365,
-0.... |
c9ca9d48-377f-4cb1-ba09-793429dfe122 | sidebar_label: 'Primary indexes'
sidebar_position: 1
description: 'In this guide we are going to do a deep dive into ClickHouse indexing.'
title: 'A Practical Introduction to Primary Indexes in ClickHouse'
slug: /guides/best-practices/sparse-primary-indexes
show_related_blogs: true
doc_type: 'guide'
keywords: ['primary index', 'indexing', 'performance', 'query optimization', 'best practices'] | {"source_file": "sparse-primary-indexes.md"} | [
0.0019751698710024357,
-0.004169275518506765,
-0.0326056145131588,
0.06522678583860397,
0.010766949504613876,
-0.0039132144302129745,
-0.0028915535658597946,
0.0016766581684350967,
-0.08105399459600449,
0.011806457303464413,
0.000276321719866246,
0.07902000844478607,
0.031375642865896225,
... |
f3d46a99-36b7-4d96-aa98-a8885bb93301 | import sparsePrimaryIndexes01 from '@site/static/images/guides/best-practices/sparse-primary-indexes-01.png';
import sparsePrimaryIndexes02 from '@site/static/images/guides/best-practices/sparse-primary-indexes-02.png';
import sparsePrimaryIndexes03a from '@site/static/images/guides/best-practices/sparse-primary-indexes-03a.png';
import sparsePrimaryIndexes03b from '@site/static/images/guides/best-practices/sparse-primary-indexes-03b.png';
import sparsePrimaryIndexes04 from '@site/static/images/guides/best-practices/sparse-primary-indexes-04.png';
import sparsePrimaryIndexes05 from '@site/static/images/guides/best-practices/sparse-primary-indexes-05.png';
import sparsePrimaryIndexes06 from '@site/static/images/guides/best-practices/sparse-primary-indexes-06.png';
import sparsePrimaryIndexes07 from '@site/static/images/guides/best-practices/sparse-primary-indexes-07.png';
import sparsePrimaryIndexes08 from '@site/static/images/guides/best-practices/sparse-primary-indexes-08.png';
import sparsePrimaryIndexes09a from '@site/static/images/guides/best-practices/sparse-primary-indexes-09a.png';
import sparsePrimaryIndexes09b from '@site/static/images/guides/best-practices/sparse-primary-indexes-09b.png';
import sparsePrimaryIndexes09c from '@site/static/images/guides/best-practices/sparse-primary-indexes-09c.png';
import sparsePrimaryIndexes10 from '@site/static/images/guides/best-practices/sparse-primary-indexes-10.png';
import sparsePrimaryIndexes11 from '@site/static/images/guides/best-practices/sparse-primary-indexes-11.png';
import sparsePrimaryIndexes12a from '@site/static/images/guides/best-practices/sparse-primary-indexes-12a.png';
import sparsePrimaryIndexes12b1 from '@site/static/images/guides/best-practices/sparse-primary-indexes-12b-1.png';
import sparsePrimaryIndexes12b2 from '@site/static/images/guides/best-practices/sparse-primary-indexes-12b-2.png';
import sparsePrimaryIndexes12c1 from '@site/static/images/guides/best-practices/sparse-primary-indexes-12c-1.png';
import sparsePrimaryIndexes12c2 from '@site/static/images/guides/best-practices/sparse-primary-indexes-12c-2.png';
import sparsePrimaryIndexes13a from '@site/static/images/guides/best-practices/sparse-primary-indexes-13a.png';
import sparsePrimaryIndexes14a from '@site/static/images/guides/best-practices/sparse-primary-indexes-14a.png';
import sparsePrimaryIndexes14b from '@site/static/images/guides/best-practices/sparse-primary-indexes-14b.png';
import sparsePrimaryIndexes15a from '@site/static/images/guides/best-practices/sparse-primary-indexes-15a.png';
import sparsePrimaryIndexes15b from '@site/static/images/guides/best-practices/sparse-primary-indexes-15b.png';
import Image from '@theme/IdealImage';
A practical introduction to primary indexes in ClickHouse
Introduction {#introduction} | {"source_file": "sparse-primary-indexes.md"} | [
-0.020824622362852097,
0.005874363239854574,
-0.008463561534881592,
-0.0005349886487238109,
0.05472714081406593,
-0.09356056898832321,
-0.026762256398797035,
0.08493663370609283,
-0.06779446452856064,
-0.001362198730930686,
0.05863413214683533,
0.05788011848926544,
0.08421875536441803,
0.0... |
2498275d-e4a3-40a4-aec9-2fc13a1708b9 | A practical introduction to primary indexes in ClickHouse
Introduction {#introduction}
In this guide we are going to do a deep dive into ClickHouse indexing. We will illustrate and discuss in detail:

- how indexing in ClickHouse is different from traditional relational database management systems
- how ClickHouse is building and using a table's sparse primary index
- what some of the best practices are for indexing in ClickHouse

You can optionally execute all ClickHouse SQL statements and queries given in this guide by yourself on your own machine.
For installation of ClickHouse and getting started instructions, see the Quick Start.
:::note
This guide focuses on ClickHouse sparse primary indexes.
For ClickHouse secondary data skipping indexes, see the Tutorial.
:::
Data set {#data-set}
Throughout this guide we will use a sample anonymized web traffic data set.

We will use a subset of 8.87 million rows (events) from the sample data set.
The uncompressed data size of the 8.87 million events is about 700 MB. This compresses to about 200 MB when stored in ClickHouse.
In our subset, each row contains three columns that indicate an internet user (`UserID` column) who clicked on a URL (`URL` column) at a specific time (`EventTime` column).

With these three columns we can already formulate some typical web analytics queries such as:

- "What are the top 10 most clicked urls for a specific user?"
- "What are the top 10 users that most frequently clicked a specific URL?"
- "What are the most popular times (e.g. days of the week) at which a user clicks on a specific URL?"
Test machine {#test-machine}
All runtime numbers given in this document are based on running ClickHouse 22.2.1 locally on a MacBook Pro with the Apple M1 Pro chip and 16GB of RAM.
A full table scan {#a-full-table-scan}
In order to see how a query is executed over our data set without a primary key, we create a table (with a MergeTree table engine) by executing the following SQL DDL statement:
```sql
CREATE TABLE hits_NoPrimaryKey
(
    `UserID` UInt32,
    `URL` String,
    `EventTime` DateTime
)
ENGINE = MergeTree
PRIMARY KEY tuple();
```
Next, insert a subset of the hits data set into the table with the following SQL insert statement.
This uses the URL table function in order to load a subset of the full dataset hosted remotely at clickhouse.com: | {"source_file": "sparse-primary-indexes.md"} | [
sql
INSERT INTO hits_NoPrimaryKey SELECT
intHash32(UserID) AS UserID,
URL,
EventTime
FROM url('https://datasets.clickhouse.com/hits/tsv/hits_v1.tsv.xz', 'TSV', 'WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, 
DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8')
WHERE URL != '';
The response is:
```response
Ok.
0 rows in set. Elapsed: 145.993 sec. Processed 8.87 million rows, 18.40 GB (60.78 thousand rows/s., 126.06 MB/s.)
```
ClickHouse client's result output shows us that the statement above inserted 8.87 million rows into the table.
Lastly, in order to simplify the discussions later on in this guide and to make the diagrams and results reproducible, we
optimize
the table using the FINAL keyword:
sql
OPTIMIZE TABLE hits_NoPrimaryKey FINAL;
:::note
In general it is neither required nor recommended to optimize a table immediately
after loading data into it. Why this is necessary for this example will become apparent.
:::
Now we execute our first web analytics query. The following query calculates the top 10 most clicked URLs for the internet user with the UserID 749927693:
sql
SELECT URL, count(URL) AS Count
FROM hits_NoPrimaryKey
WHERE UserID = 749927693
GROUP BY URL
ORDER BY Count DESC
LIMIT 10;
The response is:
```response
ββURLβββββββββββββββββββββββββββββ¬βCountββ
β http://auto.ru/chatay-barana.. β 170 β
β http://auto.ru/chatay-id=371...β 52 β
β http://public_search β 45 β
β http://kovrik-medvedevushku-...β 36 β
β http://forumal β 33 β
β http://korablitz.ru/L_1OFFER...β 14 β
β http://auto.ru/chatay-id=371...β 14 β
β http://auto.ru/chatay-john-D...β 13 β
β http://auto.ru/chatay-john-D...β 10 β
β http://wot/html?page/23600_m...β 9 β
ββββββββββββββββββββββββββββββββββ΄ββββββββ
10 rows in set. Elapsed: 0.022 sec.
highlight-next-line
Processed 8.87 million rows,
70.45 MB (398.53 million rows/s., 3.17 GB/s.)
```
ClickHouse client's result output indicates that ClickHouse executed a full table scan! Every single one of the 8.87 million rows of our table was streamed into ClickHouse. That doesn't scale.
To make this (way) more efficient and (much) faster, we need to use a table with an appropriate primary key. This will allow ClickHouse to automatically (based on the primary key's column(s)) create a sparse primary index which can then be used to significantly speed up the execution of our example query.
ClickHouse index design {#clickhouse-index-design}
An index design for massive data scales {#an-index-design-for-massive-data-scales}
In traditional relational database management systems, the primary index would contain one entry per table row. This would result in the primary index containing 8.87 million entries for our data set. Such an index allows the fast location of specific rows, resulting in high efficiency for lookup queries and point updates. Searching an entry in a
B(+)-Tree
data structure has an average time complexity of
O(log n)
; more precisely,
log_b n = log_2 n / log_2 b
where
b
is the branching factor of the
B(+)-Tree
and
n
is the number of indexed rows. Because
b
is typically between several hundred and several thousand,
B(+)-Trees
are very shallow structures, and few disk-seeks are required to locate records. With 8.87 million rows and a branching factor of 1000, 2.3 disk seeks are needed on average. This capability comes at a cost: additional disk and memory overheads, higher insertion costs when adding new rows to the table and entries to the index, and sometimes rebalancing of the B-Tree.
Considering the challenges associated with B-Tree indexes, table engines in ClickHouse utilise a different approach. The ClickHouse
MergeTree Engine Family
has been designed and optimized to handle massive data volumes. These tables are designed to receive millions of row inserts per second and store very large (hundreds of petabytes) volumes of data. Data is quickly written to a table
part by part
, with rules applied for merging the parts in the background. In ClickHouse each part has its own primary index. When parts are merged, the merged part's primary indexes are also merged. At the very large scale that ClickHouse is designed for, it is paramount to be very disk and memory efficient. Therefore, instead of indexing every row, the primary index for a part has one index entry (known as a 'mark') per group of rows (called a 'granule') - this technique is called
sparse index
.
Sparse indexing is possible because ClickHouse stores the rows for a part on disk ordered by the primary key column(s). Instead of directly locating single rows (like a B-Tree based index does), the sparse primary index allows ClickHouse to quickly (via a binary search over the index entries) identify groups of rows that could possibly match the query. The located groups of potentially matching rows (granules) are then streamed in parallel into the ClickHouse engine in order to find the matches. This index design allows the primary index to be small (it can, and must, fit completely into main memory), whilst still significantly speeding up query execution times: especially for range queries that are typical in data analytics use cases.
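To make the idea concrete, here is a minimal sketch (not ClickHouse's actual implementation) of granule selection with a sparse index: one key per granule, a binary search over the marks, and every granule whose key interval could contain the searched value is returned as a candidate. The mark values are purely hypothetical:

```python
from bisect import bisect_left, bisect_right

# One index entry ("mark") per granule: the key of the granule's first row.
# The data is sorted by this key, so granule g spans the key interval
# between marks[g] and marks[g + 1].
marks = [0, 100, 250, 250, 400, 900]  # hypothetical first-row keys

def candidate_granules(key):
    """Granules that could contain `key` and therefore must be scanned."""
    lo = max(bisect_left(marks, key) - 1, 0)  # granule just before the first mark == key
    hi = bisect_right(marks, key) - 1         # last granule whose first key <= key
    return list(range(lo, hi + 1))

print(candidate_granules(250))  # [1, 2, 3] -- three granules must be scanned
print(candidate_granules(50))   # [0]       -- only granule 0 can contain 50
```

Note that the searched key can sit in the granule *before* the first mark equal to it, because that granule's rows run right up to the next granule's first row.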
The following illustrates in detail how ClickHouse builds and uses its sparse primary index. Later on in the article, we will discuss some best practices for choosing, removing, and ordering the table columns that are used to build the index (the primary key columns).
A table with a primary key {#a-table-with-a-primary-key}
Create a table that has a compound primary key with key columns UserID and URL:
sql
CREATE TABLE hits_UserID_URL
(
`UserID` UInt32,
`URL` String,
`EventTime` DateTime
)
ENGINE = MergeTree
-- highlight-next-line
PRIMARY KEY (UserID, URL)
ORDER BY (UserID, URL, EventTime)
SETTINGS index_granularity = 8192, index_granularity_bytes = 0, compress_primary_key = 0;
DDL Statement Details
In order to simplify the discussions later on in this guide, as well as make the diagrams and results reproducible, the DDL statement:
Specifies a compound sorting key for the table via an
ORDER BY
clause.
Explicitly controls how many index entries the primary index will have through the settings:
index_granularity
: explicitly set to its default value of 8192. This means that for each group of 8192 rows, the primary index will have one index entry. For example, if the table contains 16384 rows, the index will have two index entries.
index_granularity_bytes
: set to 0 in order to disable
adaptive index granularity
. Adaptive index granularity means that ClickHouse automatically creates one index entry for a group of n rows if either of these is true:
If
n
is less than 8192 and the size of the combined row data for that
n
rows is larger than or equal to 10 MB (the default value for
index_granularity_bytes
).
If the combined row data size for
n
rows is less than 10 MB but
n
is 8192.
compress_primary_key
: set to 0 to disable
compression of the primary index
. This will allow us to optionally inspect its contents later.
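Under the assumptions stated above, the adaptive-granularity cut rule can be sketched as a simple predicate (a simplification for illustration, not ClickHouse's actual code; the defaults are the ones quoted in the text):

```python
def start_new_granule(rows_accumulated, bytes_accumulated,
                      index_granularity=8192,
                      index_granularity_bytes=10 * 1024 * 1024):
    """True when the current granule should be closed and a mark written."""
    # Close the granule when the row limit is reached, or earlier when the
    # accumulated row data reaches the byte threshold (the adaptive part).
    return (rows_accumulated >= index_granularity
            or bytes_accumulated >= index_granularity_bytes)

print(start_new_granule(8192, 500_000))           # True: row limit reached
print(start_new_granule(1000, 10 * 1024 * 1024))  # True: wide rows hit 10 MiB first
print(start_new_granule(1000, 500_000))           # False: keep accumulating
```

Setting `index_granularity_bytes = 0` in the DDL above disables the byte-based condition, leaving only the fixed 8192-row rule.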
The primary key in the DDL statement above causes the creation of the primary index based on the two specified key columns.
Next insert the data:
sql
INSERT INTO hits_UserID_URL SELECT
intHash32(UserID) AS UserID,
URL,
EventTime
FROM url('https://datasets.clickhouse.com/hits/tsv/hits_v1.tsv.xz', 'TSV', 'WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, 
DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8')
WHERE URL != '';
The response looks like:
response
0 rows in set. Elapsed: 149.432 sec. Processed 8.87 million rows, 18.40 GB (59.38 thousand rows/s., 123.16 MB/s.)
And optimize the table:
sql
OPTIMIZE TABLE hits_UserID_URL FINAL;
We can use the following query to obtain metadata about our table:
sql
SELECT
part_type,
path,
formatReadableQuantity(rows) AS rows,
formatReadableSize(data_uncompressed_bytes) AS data_uncompressed_bytes,
formatReadableSize(data_compressed_bytes) AS data_compressed_bytes,
formatReadableSize(primary_key_bytes_in_memory) AS primary_key_bytes_in_memory,
marks,
formatReadableSize(bytes_on_disk) AS bytes_on_disk
FROM system.parts
WHERE (table = 'hits_UserID_URL') AND (active = 1)
FORMAT Vertical;
The response is:
```response
part_type: Wide
path: ./store/d9f/d9f36a1a-d2e6-46d4-8fb5-ffe9ad0d5aed/all_1_9_2/
rows: 8.87 million
data_uncompressed_bytes: 733.28 MiB
data_compressed_bytes: 206.94 MiB
primary_key_bytes_in_memory: 96.93 KiB
marks: 1083
bytes_on_disk: 207.07 MiB
1 rows in set. Elapsed: 0.003 sec.
```
The output of the ClickHouse client shows:
The table's data is stored in
wide format
in a specific directory on disk, meaning that there will be one data file (and one mark file) per table column inside that directory.
The table has 8.87 million rows.
The uncompressed data size of all rows together is 733.28 MiB.
The compressed size on disk of all rows together is 206.94 MiB.
The table has a primary index with 1083 entries (called 'marks') and the size of the index is 96.93 KiB.
In total, the table's data and mark files and primary index file together take 207.07 MiB on disk.
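The reported numbers are consistent with each other; for example, dividing the in-memory index size by the number of marks gives the average size of one (UserID, URL) index entry (KiB meaning 1024 bytes):

```python
marks = 1083
index_bytes = 96.93 * 1024  # primary_key_bytes_in_memory reported above

# Average bytes per index entry: 4 bytes for the UInt32 UserID plus a
# variable-length URL string per mark.
bytes_per_mark = index_bytes / marks
print(round(bytes_per_mark))  # ~92 bytes per entry
```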
Data is stored on disk ordered by primary key column(s) {#data-is-stored-on-disk-ordered-by-primary-key-columns}
Our table that we created above has
- a compound
primary key
(UserID, URL)
and
- a compound
sorting key
(UserID, URL, EventTime)
.
:::note
- If we had specified only the sorting key, then the primary key would be implicitly defined to be equal to the sorting key.
In order to be memory efficient we explicitly specified a primary key that only contains columns that our queries are filtering on. The primary index that is based on the primary key is completely loaded into the main memory.
In order to keep the guide's diagrams consistent and to maximise the compression ratio, we defined a separate sorting key that includes all of our table's columns (if similar data is placed close to each other in a column, for example via sorting, then that data will compress better).
The primary key needs to be a prefix of the sorting key if both are specified.
:::
The inserted rows are stored on disk in lexicographical order (ascending) by the primary key columns (and the additional
EventTime
column from the sorting key).
:::note
ClickHouse allows inserting multiple rows with identical primary key column values. In this case (see row 1 and row 2 in the diagram below), the final order is determined by the specified sorting key and therefore the value of the
EventTime
column.
:::
ClickHouse is a
column-oriented database management system
. As shown in the diagram below
- for the on disk representation, there is a single data file (*.bin) per table column where all the values for that column are stored in a
compressed
format, and
- the 8.87 million rows are stored on disk in lexicographic ascending order by the primary key columns (and the additional sort key columns) i.e. in this case
- first by
UserID
,
- then by
URL
,
- and lastly by
EventTime
:
UserID.bin
,
URL.bin
, and
EventTime.bin
are the data files on disk where the values of the
UserID
,
URL
, and
EventTime
columns are stored.
:::note
- As the primary key defines the lexicographical order of the rows on disk, a table can only have one primary key.
We are numbering rows starting with 0 in order to be aligned with the ClickHouse internal row numbering scheme that is also used for logging messages.
:::
Data is organized into granules for parallel data processing {#data-is-organized-into-granules-for-parallel-data-processing}
For data processing purposes, a table's column values are logically divided into granules.
A granule is the smallest indivisible data set that is streamed into ClickHouse for data processing.
This means that instead of reading individual rows, ClickHouse is always reading (in a streaming fashion and in parallel) a whole group (granule) of rows.
:::note
Column values are not physically stored inside granules: granules are just a logical organization of the column values for query processing.
:::
The following diagram shows how the (column values of) 8.87 million rows of our table
are organized into 1083 granules, as a result of the table's DDL statement containing the setting
index_granularity
(set to its default value of 8192).
The first (based on physical order on disk) 8192 rows (their column values) logically belong to granule 0, then the next 8192 rows (their column values) belong to granule 1 and so on.
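The granule count follows directly from the row count and the granularity setting (taking 8.87 million as an approximation of the exact row count):

```python
import math

rows = 8_870_000        # approximate row count used throughout this guide
index_granularity = 8192

granules = math.ceil(rows / index_granularity)
print(granules)         # 1083

# The last granule holds whatever is left over, i.e. fewer than 8192 rows:
print(rows - (granules - 1) * index_granularity)  # 6256
```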
:::note
- The last granule (granule 1082) "contains" less than 8192 rows.
We mentioned in the beginning of this guide in the "DDL Statement Details", that we disabled
adaptive index granularity
(in order to simplify the discussions in this guide, as well as make the diagrams and results reproducible).
Therefore all granules (except the last one) of our example table have the same size.
For tables with adaptive index granularity (index granularity is adaptive by
default
), the size of some granules can be less than 8192 rows depending on the row data sizes.
We marked some column values from our primary key columns (
UserID
,
URL
) in orange.
These orange-marked column values are the primary key column values of each first row of each granule.
As we will see below, these orange-marked column values will be the entries in the table's primary index.
We are numbering granules starting with 0 in order to be aligned with the ClickHouse internal numbering scheme that is also used for logging messages.
:::
The primary index has one entry per granule {#the-primary-index-has-one-entry-per-granule}
The primary index is created based on the granules shown in the diagram above. This index is an uncompressed flat array file (primary.idx), containing so-called numerical index marks starting at 0.
The diagram below shows that the index stores the primary key column values (the values marked in orange in the diagram above) for each first row for each granule.
Or in other words: the primary index stores the primary key column values from each 8192nd row of the table (based on the physical row order defined by the primary key columns).
For example
- the first index entry ('mark 0' in the diagram below) is storing the key column values of the first row of granule 0 from the diagram above,
- the second index entry ('mark 1' in the diagram below) is storing the key column values of the first row of granule 1 from the diagram above, and so on.
In total the index has 1083 entries for our table with 8.87 million rows and 1083 granules:
:::note
- For tables with
adaptive index granularity
, there is also one "final" additional mark stored in the primary index that records the values of the primary key columns of the last table row, but because we disabled adaptive index granularity (in order to simplify the discussions in this guide, as well as make the diagrams and results reproducible), the index of our example table doesn't include this final mark.
The primary index file is completely loaded into the main memory. If the file is larger than the available free memory space then ClickHouse will raise an error.
:::
Inspecting the content of the primary index
On a self-managed ClickHouse cluster we can use the
file table function
for inspecting the content of the primary index of our example table.
For that we first need to copy the primary index file into the
user_files_path
of a node from the running cluster:
Step 1: Get part-path that contains the primary index file
`
SELECT path FROM system.parts WHERE table = 'hits_UserID_URL' AND active = 1
`
returns `/Users/tomschreiber/Clickhouse/store/85f/85f4ee68-6e28-4f08-98b1-7d8affa1d88c/all_1_9_4` on the test machine.
Step 2: Get user_files_path
The
default user_files_path
on Linux is
`/var/lib/clickhouse/user_files/`
and you can check whether it has been changed: `$ grep user_files_path /etc/clickhouse-server/config.xml`
On the test machine the path is `/Users/tomschreiber/Clickhouse/user_files/`
Step 3: Copy the primary index file into the user_files_path
`cp /Users/tomschreiber/Clickhouse/store/85f/85f4ee68-6e28-4f08-98b1-7d8affa1d88c/all_1_9_4/primary.idx /Users/tomschreiber/Clickhouse/user_files/primary-hits_UserID_URL.idx`
Now we can inspect the content of the primary index via SQL:
Get amount of entries
`
SELECT count()
FROM file('primary-hits_UserID_URL.idx', 'RowBinary', 'UserID UInt32, URL String');
`
returns `1083`
Get first two index marks
`
SELECT UserID, URL
FROM file('primary-hits_UserID_URL.idx', 'RowBinary', 'UserID UInt32, URL String')
LIMIT 0, 2;
`
returns
`
240923, http://showtopics.html%3...
4073710, http://mk.ru&pos=3_0
`
Get last index mark
`
SELECT UserID, URL FROM file('primary-hits_UserID_URL.idx', 'RowBinary', 'UserID UInt32, URL String')
LIMIT 1082, 1;
`
returns
`
4292714039 β http://sosyal-mansetleri...
`
This matches exactly our diagram of the primary index content for our example table:
The primary key entries are called index marks because each index entry is marking the start of a specific data range. Specifically for the example table:
- UserID index marks:
The stored
UserID
values in the primary index are sorted in ascending order.
'mark 1' in the diagram above thus indicates that the
UserID
values of all table rows in granule 1, and in all following granules, are guaranteed to be greater than or equal to 4,073,710.
As we will see later
, this global order enables ClickHouse to
use a binary search algorithm
over the index marks for the first key column when a query is filtering on the first column of the primary key.
URL index marks:
The quite similar cardinality of the primary key columns
UserID
and
URL
means that the index marks for all key columns after the first column in general only indicate a data range as long as the predecessor key column value stays the same for all table rows within at least the current granule.
For example, because the UserID values of mark 0 and mark 1 are different in the diagram above, ClickHouse can't assume that all URL values of all table rows in granule 0 are larger or equal to
'http://showtopics.html%3...'
. However, if the UserID values of mark 0 and mark 1 were the same in the diagram above (meaning that the UserID value stays the same for all table rows within granule 0), then ClickHouse could assume that all URL values of all table rows in granule 0 are larger or equal to
'http://showtopics.html%3...'
.
We will discuss the consequences of this on query execution performance in more detail later.
The primary index is used for selecting granules {#the-primary-index-is-used-for-selecting-granules}
We can now execute our queries with support from the primary index.
The following calculates the top 10 most clicked urls for the UserID 749927693.
sql
SELECT URL, count(URL) AS Count
FROM hits_UserID_URL
WHERE UserID = 749927693
GROUP BY URL
ORDER BY Count DESC
LIMIT 10;
The response is:
```response
ββURLβββββββββββββββββββββββββββββ¬βCountββ
β http://auto.ru/chatay-barana.. β 170 β
β http://auto.ru/chatay-id=371...β 52 β
β http://public_search β 45 β
β http://kovrik-medvedevushku-...β 36 β
β http://forumal β 33 β
β http://korablitz.ru/L_1OFFER...β 14 β
β http://auto.ru/chatay-id=371...β 14 β
β http://auto.ru/chatay-john-D...β 13 β
β http://auto.ru/chatay-john-D...β 10 β
β http://wot/html?page/23600_m...β 9 β
ββββββββββββββββββββββββββββββββββ΄ββββββββ
10 rows in set. Elapsed: 0.005 sec.
highlight-next-line
Processed 8.19 thousand rows,
740.18 KB (1.53 million rows/s., 138.59 MB/s.)
```
The output of the ClickHouse client now shows that instead of doing a full table scan, only 8.19 thousand rows were streamed into ClickHouse.
If
trace logging
is enabled then the ClickHouse server log file shows that ClickHouse was running a
binary search
over the 1083 UserID index marks, in order to identify granules that possibly can contain rows with a UserID column value of
749927693
. This requires 19 steps with an average time complexity of
O(log2 n)
:
```response
...Executor): Key condition: (column 0 in [749927693, 749927693])
highlight-next-line
...Executor): Running binary search on index range for part all_1_9_2 (1083 marks)
...Executor): Found (LEFT) boundary mark: 176
...Executor): Found (RIGHT) boundary mark: 177
...Executor): Found continuous range in 19 steps
...Executor): Selected 1/1 parts by partition key, 1 parts by primary key,
highlight-next-line
1/1083 marks by primary key, 1 marks to read from 1 ranges
...Reading ...approx. 8192 rows starting from 1441792
```
We can see in the trace log above, that one mark out of the 1083 existing marks satisfied the query.
Trace Log Details
Mark 176 was identified (the 'found left boundary mark' is inclusive, the 'found right boundary mark' is exclusive), and therefore all 8192 rows from granule 176 (which starts at row 1,441,792 - we will see that later on in this guide) are then streamed into ClickHouse in order to find the actual rows with a UserID column value of `749927693`.
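The row offset in the trace log follows directly from the selected mark, because every granule (except the last one) contains exactly `index_granularity` rows:

```python
index_granularity = 8192
mark = 176  # the left boundary mark found by the binary search above

first_row = mark * index_granularity
print(first_row)  # 1441792 -- matches "approx. 8192 rows starting from 1441792"
```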
We can also reproduce this by using the
EXPLAIN clause
in our example query:
sql
EXPLAIN indexes = 1
SELECT URL, count(URL) AS Count
FROM hits_UserID_URL
WHERE UserID = 749927693
GROUP BY URL
ORDER BY Count DESC
LIMIT 10;
The response looks like:
```response
ββexplainββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β Expression (Projection) β
β Limit (preliminary LIMIT (without OFFSET)) β
β Sorting (Sorting for ORDER BY) β
β Expression (Before ORDER BY) β
β Aggregating β
β Expression (Before GROUP BY) β
β Filter (WHERE) β
β SettingQuotaAndLimits (Set limits and quota after reading from storage) β
β ReadFromMergeTree β
β Indexes: β
β PrimaryKey β
β Keys: β
β UserID β
β Condition: (UserID in [749927693, 749927693]) β
β Parts: 1/1 β
# highlight-next-line
β Granules: 1/1083 β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
16 rows in set. Elapsed: 0.003 sec.
```
The client output is showing that one out of the 1083 granules was selected as possibly containing rows with a UserID column value of 749927693.
:::note Conclusion
When a query is filtering on a column that is part of a compound key and is the first key column, then ClickHouse is running the binary search algorithm over the key column's index marks.
:::
As discussed above, ClickHouse is using its sparse primary index for quickly (via binary search) selecting granules that could possibly contain rows that match a query.
This is the
first stage (granule selection)
of ClickHouse query execution.
In the
second stage (data reading)
, ClickHouse is locating the selected granules in order to stream all their rows into the ClickHouse engine in order to find the rows that are actually matching the query.
We discuss that second stage in more detail in the following section.
Mark files are used for locating granules {#mark-files-are-used-for-locating-granules}
The following diagram illustrates a part of the primary index file for our table.
As discussed above, via a binary search over the index's 1083 UserID marks, mark 176 was identified. Its corresponding granule 176 can therefore possibly contain rows with a UserID column value of 749.927.693.
Granule Selection Details
The diagram above shows that mark 176 is the first index entry where both the minimum UserID value of the associated granule 176 is smaller than 749.927.693, and the minimum UserID value of granule 177 for the next mark (mark 177) is greater than this value. Therefore only the corresponding granule 176 for mark 176 can possibly contain rows with a UserID column value of 749.927.693.
In order to confirm (or not) that some row(s) in granule 176 contain a UserID column value of 749.927.693, all 8192 rows belonging to this granule need to be streamed into ClickHouse.
To achieve this, ClickHouse needs to know the physical location of granule 176.
In ClickHouse the physical locations of all granules for our table are stored in mark files. Similar to data files, there is one mark file per table column.
The following diagram shows the three mark files
UserID.mrk
,
URL.mrk
, and
EventTime.mrk
that store the physical locations of the granules for the table's
UserID
,
URL
, and
EventTime
columns.
We have discussed how the primary index is a flat uncompressed array file (primary.idx), containing index marks that are numbered starting at 0.
Similarly, a mark file is also a flat uncompressed array file (*.mrk) containing marks that are numbered starting at 0.
Once ClickHouse has identified and selected the index mark for a granule that can possibly contain matching rows for a query, a positional array lookup can be performed in the mark files in order to obtain the physical locations of the granule.
Each mark file entry for a specific column is storing two locations in the form of offsets:
The first offset ('block_offset' in the diagram above) is locating the
block
in the
compressed
column data file that contains the compressed version of the selected granule. This compressed block potentially contains a few compressed granules. The located compressed file block is uncompressed into the main memory on read.
The second offset ('granule_offset' in the diagram above) from the mark-file provides the location of the granule within the uncompressed block data.
All the 8192 rows belonging to the located uncompressed granule are then streamed into ClickHouse for further processing.
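The two-step lookup described above can be modeled in a few lines of Python. This is a toy model — zlib compression, tiny granules, and made-up offsets stand in for ClickHouse's real on-disk format:

```python
import zlib

# Toy model of the two-step granule lookup: 4 granules of 4 rows each,
# all stored in a single compressed block of a hypothetical .bin file.
ROWS_PER_GRANULE = 4
column_data = bytes(range(16))                 # uncompressed column values, rows 0..15
bin_file = {0: zlib.compress(column_data)}     # block_offset -> compressed block
mrk = [(0, 0), (0, 4), (0, 8), (0, 12)]        # (block_offset, granule_offset) per mark

def stream_granule(mark_number):
    block_offset, granule_offset = mrk[mark_number]   # positional array lookup, O(1)
    block = zlib.decompress(bin_file[block_offset])   # first offset: locate + uncompress block
    # second offset: the granule inside the uncompressed block
    return list(block[granule_offset:granule_offset + ROWS_PER_GRANULE])

print(stream_granule(2))   # rows 8..11 of the column -> [8, 9, 10, 11]
```

The mark file lookup itself is a constant-time array access; only then is the (potentially multi-granule) block decompressed.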
:::note
For tables with
wide format
and without
adaptive index granularity
, ClickHouse uses
.mrk
mark files as visualised above, that contain entries with two 8 byte long addresses per entry. These entries are physical locations of granules that all have the same size.
Index granularity is adaptive by
default
, but for our example table we disabled adaptive index granularity (in order to simplify the discussions in this guide, as well as make the diagrams and results reproducible). Our table is using wide format because the size of the data is larger than
min_bytes_for_wide_part
(which is 10 MB by default for self-managed clusters).
For tables with wide format and with adaptive index granularity, ClickHouse uses
.mrk2
mark files, that contain similar entries to
.mrk
mark files but with an additional third value per entry: the number of rows of the granule that the current entry is associated with.
For tables with
compact format
, ClickHouse uses
.mrk3
mark files.
:::
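For illustration, reading such a non-adaptive mark entry boils down to a fixed-stride read of two 64-bit offsets. The byte order, entry layout, and offset values below are assumptions made for the sketch, not a parser for real `.mrk` files:

```python
import struct

# Hypothetical bytes of a .mrk file: one 16-byte entry per granule, numbered
# from 0, each entry holding two little-endian UInt64 offsets.
mrk_bytes = struct.pack("<6Q", 0, 0, 0, 8192, 65536, 0)   # three made-up entries

def read_mark(data, mark_number):
    # Positional lookup: entry for granule N starts at byte N * 16.
    return struct.unpack_from("<QQ", data, mark_number * 16)

print(read_mark(mrk_bytes, 1))   # (0, 8192)
```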
:::note Why Mark Files
Why does the primary index not directly contain the physical locations of the granules that are corresponding to index marks?
Because at the very large scale that ClickHouse is designed for, it is important to be very disk and memory efficient.
The primary index file needs to fit into the main memory.
For our example query, ClickHouse used the primary index and selected a single granule that can possibly contain rows matching our query. Only for that one granule does ClickHouse then need the physical locations in order to stream the corresponding rows for further processing.
Furthermore, this offset information is only needed for the UserID and URL columns.
Offset information is not needed for columns that are not used in the query e.g. the
EventTime
.
For our sample query, ClickHouse needs only the two physical location offsets for granule 176 in the UserID data file (UserID.bin) and the two physical location offsets for granule 176 in the URL data file (URL.bin).
The indirection provided by mark files avoids storing, directly within the primary index, entries for the physical locations of all 1083 granules for all three columns: thus avoiding having unnecessary (potentially unused) data in main memory.
:::
The following diagram and the text below illustrate how for our example query ClickHouse locates granule 176 in the UserID.bin data file.
We discussed earlier in this guide that ClickHouse selected the primary index mark 176 and therefore granule 176 as possibly containing matching rows for our query.
ClickHouse now uses the selected mark number (176) from the index for a positional array lookup in the UserID.mrk mark file in order to get the two offsets for locating granule 176.
As shown, the first offset is locating the compressed file block within the UserID.bin data file that in turn contains the compressed version of granule 176.
Once the located file block is uncompressed into the main memory, the second offset from the mark file can be used to locate granule 176 within the uncompressed data.
ClickHouse needs to locate (and stream all values from) granule 176 from both the UserID.bin data file and the URL.bin data file in order to execute our example query (top 10 most clicked URLs for the internet user with the UserID 749.927.693).
The diagram above shows how ClickHouse is locating the granule for the UserID.bin data file.
In parallel, ClickHouse is doing the same for granule 176 for the URL.bin data file. The two respective granules are aligned and streamed into the ClickHouse engine for further processing i.e. aggregating and counting the URL values per group for all rows where the UserID is 749.927.693, before finally outputting the 10 largest URL groups in descending count order.
Using multiple primary indexes {#using-multiple-primary-indexes}
Secondary key columns can (not) be inefficient {#secondary-key-columns-can-not-be-inefficient}
When a query is filtering on a column that is part of a compound key and is the first key column,
then ClickHouse is running the binary search algorithm over the key column's index marks
.
But what happens when a query is filtering on a column that is part of a compound key, but is not the first key column?
:::note
We discuss a scenario when a query is explicitly not filtering on the first key column, but on a secondary key column.
When a query is filtering on both the first key column and on any key column(s) after the first then ClickHouse is running binary search over the first key column's index marks.
:::
We use a query that calculates the top 10 users that have most frequently clicked on the URL "http://public_search":
```sql
SELECT UserID, count(UserID) AS Count
FROM hits_UserID_URL
WHERE URL = 'http://public_search'
GROUP BY UserID
ORDER BY Count DESC
LIMIT 10;
```
The response is:
```response
ββββββUserIDββ¬βCountββ
β 2459550954 β 3741 β
β 1084649151 β 2484 β
β 723361875 β 729 β
β 3087145896 β 695 β
β 2754931092 β 672 β
β 1509037307 β 582 β
β 3085460200 β 573 β
β 2454360090 β 556 β
β 3884990840 β 539 β
β 765730816 β 536 β
ββββββββββββββ΄ββββββββ
10 rows in set. Elapsed: 0.086 sec.
# highlight-next-line
Processed 8.81 million rows,
799.69 MB (102.11 million rows/s., 9.27 GB/s.)
```
The client output indicates that ClickHouse almost executed a full table scan despite the
URL column being part of the compound primary key
! ClickHouse reads 8.81 million rows from the 8.87 million rows of the table.
If
trace_logging
is enabled then the ClickHouse server log file shows that ClickHouse used a
generic exclusion search
over the 1083 URL index marks in order to identify those granules that possibly can contain rows with a URL column value of "http://public_search":
```response
...Executor): Key condition: (column 1 in ['http://public_search',
'http://public_search'])
# highlight-next-line
...Executor): Used generic exclusion search over index for part all_1_9_2
with 1537 steps
...Executor): Selected 1/1 parts by partition key, 1 parts by primary key,
# highlight-next-line
1076/1083 marks by primary key, 1076 marks to read from 5 ranges
...Executor): Reading approx. 8814592 rows with 10 streams
```
We can see in the sample trace log above, that 1076 (via the marks) out of 1083 granules were selected as possibly containing rows with a matching URL value.
This results in 8.81 million rows being streamed into the ClickHouse engine (in parallel by using 10 streams), in order to identify the rows that actually contain the URL value "http://public_search".
However, as we will see later, only 39 granules out of the selected 1076 granules actually contain matching rows.
Whilst the primary index based on the compound primary key (UserID, URL) was very useful for speeding up queries filtering for rows with a specific UserID value, the index is not providing significant help with speeding up the query that filters for rows with a specific URL value.
The reason for this is that the URL column is not the first key column and therefore ClickHouse is using a generic exclusion search algorithm (instead of binary search) over the URL column's index marks, and
the effectiveness of that algorithm is dependent on the cardinality difference
between the URL column and its predecessor key column UserID.
In order to illustrate that, we give some details about how the generic exclusion search works.
Generic exclusion search algorithm {#generic-exclusion-search-algorithm}
The following is illustrating how the
ClickHouse generic exclusion search algorithm
works when granules are selected via a secondary column where the predecessor key column has a low(er) or high(er) cardinality.
As an example for both cases we will assume:
- a query that is searching for rows with URL value = "W3".
- an abstract version of our hits table with simplified values for UserID and URL.
- the same compound primary key (UserID, URL) for the index. This means rows are first ordered by UserID values. Rows with the same UserID value are then ordered by URL.
- a granule size of two i.e. each granule contains two rows.
We have marked the key column values for the first table rows for each granule in orange in the diagrams below.
Predecessor key column has low(er) cardinality
Suppose UserID had low cardinality. In this case it would be likely that the same UserID value is spread over multiple table rows and granules and therefore index marks. For index marks with the same UserID, the URL values for the index marks are sorted in ascending order (because the table rows are ordered first by UserID and then by URL). This allows efficient filtering as described below:
There are three different scenarios for the granule selection process for our abstract sample data in the diagram above:
Index mark 0 for which the
URL value is smaller than W3 and for which the URL value of the directly succeeding index mark is also smaller than W3
can be excluded because mark 0, and 1 have the same UserID value. Note that this exclusion-precondition ensures that granule 0 is completely composed of U1 UserID values so that ClickHouse can assume that also the maximum URL value in granule 0 is smaller than W3 and exclude the granule.
Index mark 1 for which the
URL value is smaller (or equal) than W3 and for which the URL value of the directly succeeding index mark is greater (or equal) than W3
is selected because it means that granule 1 can possibly contain rows with URL W3.
Index marks 2 and 3 for which the
URL value is greater than W3
can be excluded, since index marks of a primary index store the key column values for the first table row for each granule and the table rows are sorted on disk by the key column values, therefore granule 2 and 3 can't possibly contain URL value W3.
Predecessor key column has high(er) cardinality
When the UserID has high cardinality then it is unlikely that the same UserID value is spread over multiple table rows and granules. This means the URL values for the index marks are not monotonically increasing:
As we can see in the diagram above, all shown marks whose URL values are smaller than W3 are getting selected for streaming their associated granules' rows into the ClickHouse engine.
This is because whilst all index marks in the diagram fall into scenario 1 described above, they do not satisfy the mentioned exclusion-precondition that
the directly succeeding index mark has the same UserID value as the current mark
and thus can't be excluded.
For example, consider index mark 0 for which the
URL value is smaller than W3 and for which the URL value of the directly succeeding index mark is also smaller than W3
. This can
not
be excluded because the directly succeeding index mark 1 does
not
have the same UserID value as the current mark 0.
This ultimately prevents ClickHouse from making assumptions about the maximum URL value in granule 0. Instead it has to assume that granule 0 potentially contains rows with URL value W3 and is forced to select mark 0.
The same scenario is true for marks 1, 2, and 3.
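This exclusion rule can be sketched conservatively in Python. Only the same-UserID exclusion-precondition discussed above is modeled, and the mark values are the made-up ones from the abstract example (granule size two):

```python
# Each mark holds (UserID, URL) of the first row of its granule; mark i+1 bounds granule i.
def generic_exclusion_search(marks, needle_url):
    selected = []
    for i in range(len(marks) - 1):
        (uid, url), (next_uid, next_url) = marks[i], marks[i + 1]
        # Exclude granule i only if it provably holds a single UserID run whose
        # URL range [url, next_url] does not cover the needle.
        if uid == next_uid and not (url <= needle_url <= next_url):
            continue
        selected.append(i)
    return selected

low_card  = [("U1", "W1"), ("U1", "W2"), ("U1", "W4"), ("U1", "W5")]
high_card = [("U1", "W1"), ("U2", "W2"), ("U3", "W1"), ("U4", "W5")]
print(generic_exclusion_search(low_card, "W3"))    # [1]       -> 2 of 3 granules excluded
print(generic_exclusion_search(high_card, "W3"))   # [0, 1, 2] -> nothing can be excluded
```

With a low-cardinality predecessor column, the same-UserID precondition holds for most mark pairs and many granules are excluded; with a high-cardinality predecessor, it rarely holds and nearly all granules must be selected.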
:::note Conclusion
The
generic exclusion search algorithm
that ClickHouse is using instead of the
binary search algorithm
when a query is filtering on a column that is part of a compound key, but is not the first key column is most effective when the predecessor key column has low(er) cardinality.
:::
In our sample data set both key columns (UserID, URL) have similar high cardinality, and, as explained, the generic exclusion search algorithm is not very effective when the predecessor key column of the URL column has a high(er) or similar cardinality.
Note about data skipping index {#note-about-data-skipping-index}
Because of the similarly high cardinality of UserID and URL, our
query filtering on URL
also wouldn't benefit much from creating a
secondary data skipping index
on the URL column
of our
table with compound primary key (UserID, URL)
.
For example, these two statements create and populate a
minmax
data skipping index on the URL column of our table:
```sql
ALTER TABLE hits_UserID_URL ADD INDEX url_skipping_index URL TYPE minmax GRANULARITY 4;
ALTER TABLE hits_UserID_URL MATERIALIZE INDEX url_skipping_index;
```
ClickHouse now created an additional index that is storing - per group of 4 consecutive
granules
(note the
GRANULARITY 4
clause in the
ALTER TABLE
statement above) - the minimum and maximum URL value:
The first index entry ('mark 0' in the diagram above) is storing the minimum and maximum URL values for the
rows belonging to the first 4 granules of our table
.
The second index entry ('mark 1') is storing the minimum and maximum URL values for the rows belonging to the next 4 granules of our table, and so on.
(ClickHouse also created a special
mark file
for the data skipping index for
locating
the groups of granules associated with the index marks.)
Because of the similarly high cardinality of UserID and URL, this secondary data skipping index can't help with excluding granules from being selected when our
query filtering on URL
is executed.
The specific URL value that the query is looking for (i.e. 'http://public_search') very likely is between the minimum and maximum value stored by the index for each group of granules resulting in ClickHouse being forced to select the group of granules (because they might contain row(s) matching the query).
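The selection logic of such a minmax entry can be sketched as follows. This is a simplified model: the group granularity is implicit (one entry per group of 4 granules) and the URL ranges are made up:

```python
# One (min_url, max_url) entry per group of 4 consecutive granules.
def groups_to_read(minmax_entries, needle):
    # A group can only be skipped when the needle falls outside its [min, max] range.
    return [i for i, (lo, hi) in enumerate(minmax_entries) if lo <= needle <= hi]

# High-cardinality URL column: every group's range spans almost the whole URL space,
# so the needle falls inside every range and no group can be skipped.
wide = [("http://a", "http://z")] * 3
print(groups_to_read(wide, "http://public_search"))   # [0, 1, 2] -> no group skipped

# Only tight, non-overlapping ranges would allow skipping:
tight = [("http://a", "http://b"), ("http://p", "http://q"), ("http://y", "http://z")]
print(groups_to_read(tight, "http://public_search"))  # [1]
```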
A need to use multiple primary indexes {#a-need-to-use-multiple-primary-indexes}
As a consequence, if we want to significantly speed up our sample query that filters for rows with a specific URL then we need to use a primary index optimized to that query.
If in addition we want to keep the good performance of our sample query that filters for rows with a specific UserID then we need to use multiple primary indexes.
The following is showing ways for achieving that.
Options for creating additional primary indexes {#options-for-creating-additional-primary-indexes}
If we want to significantly speed up both of our sample queries - the one that filters for rows with a specific UserID and the one that filters for rows with a specific URL - then we need to use multiple primary indexes by using one of these three options:
Creating a
second table
with a different primary key.
Creating a
materialized view
on our existing table.
Adding a
projection
to our existing table.
All three options will effectively duplicate our sample data into an additional table in order to reorganize the table primary index and row sort order.
However, the three options differ in how transparent that additional table is to the user with respect to the routing of queries and insert statements.
When creating a
second table
with a different primary key then queries must be explicitly sent to the table version best suited for the query, and new data must be inserted explicitly into both tables in order to keep the tables in sync:
With a
materialized view
the additional table is implicitly created and data is automatically kept in sync between both tables:
And the
projection
is the most transparent option because next to automatically keeping the implicitly created (and hidden) additional table in sync with data changes, ClickHouse will automatically choose the most effective table version for queries:
In the following we discuss these three options for creating and using multiple primary indexes in more detail and with real examples.
Option 1: Secondary Tables {#option-1-secondary-tables}
We are creating a new additional table where we switch the order of the key columns (compared to our original table) in the primary key:
```sql
CREATE TABLE hits_URL_UserID
(
    `UserID` UInt32,
    `URL` String,
    `EventTime` DateTime
)
ENGINE = MergeTree
-- highlight-next-line
PRIMARY KEY (URL, UserID)
ORDER BY (URL, UserID, EventTime)
SETTINGS index_granularity = 8192, index_granularity_bytes = 0, compress_primary_key = 0;
```
Insert all 8.87 million rows from our
original table
into the additional table:
```sql
INSERT INTO hits_URL_UserID
SELECT * FROM hits_UserID_URL;
```
The response looks like:
```response
Ok.
0 rows in set. Elapsed: 2.898 sec. Processed 8.87 million rows, 838.84 MB (3.06 million rows/s., 289.46 MB/s.)
```
And finally optimize the table:
```sql
OPTIMIZE TABLE hits_URL_UserID FINAL;
```
Because we switched the order of the columns in the primary key, the inserted rows are now stored on disk in a different lexicographical order (compared to our
original table
) and therefore also the 1083 granules of that table are containing different values than before:
This is the resulting primary key:
That can now be used to significantly speed up the execution of our example query filtering on the URL column in order to calculate the top 10 users that most frequently clicked on the URL "http://public_search":
```sql
SELECT UserID, count(UserID) AS Count
-- highlight-next-line
FROM hits_URL_UserID
WHERE URL = 'http://public_search'
GROUP BY UserID
ORDER BY Count DESC
LIMIT 10;
```
The response is:
```response
ββββββUserIDββ¬βCountββ
β 2459550954 β 3741 β
β 1084649151 β 2484 β
β 723361875 β 729 β
β 3087145896 β 695 β
β 2754931092 β 672 β
β 1509037307 β 582 β
β 3085460200 β 573 β
β 2454360090 β 556 β
β 3884990840 β 539 β
β 765730816 β 536 β
ββββββββββββββ΄ββββββββ
10 rows in set. Elapsed: 0.017 sec.
# highlight-next-line
Processed 319.49 thousand rows,
11.38 MB (18.41 million rows/s., 655.75 MB/s.)
```
Now, instead of
almost doing a full table scan
, ClickHouse executed that query much more effectively.
With the primary index from the
original table
where UserID was the first, and URL the second key column, ClickHouse used a
generic exclusion search
over the index marks for executing that query and that was not very effective because of the similarly high cardinality of UserID and URL.
With URL as the first column in the primary index, ClickHouse is now running
binary search
over the index marks.
The corresponding trace log in the ClickHouse server log file confirms that:
```response
...Executor): Key condition: (column 0 in ['http://public_search',
'http://public_search'])
# highlight-next-line
...Executor): Running binary search on index range for part all_1_9_2 (1083 marks)
...Executor): Found (LEFT) boundary mark: 644
...Executor): Found (RIGHT) boundary mark: 683
...Executor): Found continuous range in 19 steps
...Executor): Selected 1/1 parts by partition key, 1 parts by primary key,
# highlight-next-line
39/1083 marks by primary key, 39 marks to read from 1 ranges
...Executor): Reading approx. 319488 rows with 2 streams
```
ClickHouse selected only 39 index marks, instead of 1076 when generic exclusion search was used.
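The saving is directly proportional to the number of selected marks, since each mark corresponds to one granule of 8192 rows:

```python
ROWS_PER_GRANULE = 8192

# URL-first primary key, binary search: 39 marks selected.
print(39 * ROWS_PER_GRANULE)     # 319488 rows streamed, matching the trace log above

# UserID-first primary key, generic exclusion search: 1076 marks selected.
print(1076 * ROWS_PER_GRANULE)   # 8814592 rows streamed
```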
Note that the additional table is optimized for speeding up the execution of our example query filtering on URLs.
Similar to the
bad performance
of that query with our
original table
, our
example query filtering on
UserIDs
will not run very effectively with the new additional table, because UserID is now the second key column in the primary index of that table and therefore ClickHouse will use generic exclusion search for granule selection, which is
not very effective for similarly high cardinality
of UserID and URL.
Open the details box for specifics.
Query filtering on UserIDs now has bad performance
```sql
SELECT URL, count(URL) AS Count
FROM hits_URL_UserID
WHERE UserID = 749927693
GROUP BY URL
ORDER BY Count DESC
LIMIT 10;
```
The response is:
```response
ββURLβββββββββββββββββββββββββββββ¬βCountββ
β http://auto.ru/chatay-barana.. β 170 β
β http://auto.ru/chatay-id=371...β 52 β
β http://public_search β 45 β
β http://kovrik-medvedevushku-...β 36 β
β http://forumal β 33 β
β http://korablitz.ru/L_1OFFER...β 14 β
β http://auto.ru/chatay-id=371...β 14 β
β http://auto.ru/chatay-john-D...β 13 β
β http://auto.ru/chatay-john-D...β 10 β
β http://wot/html?page/23600_m...β 9 β
ββββββββββββββββββββββββββββββββββ΄ββββββββ
10 rows in set. Elapsed: 0.024 sec.
# highlight-next-line
Processed 8.02 million rows,
73.04 MB (340.26 million rows/s., 3.10 GB/s.)
```
Server Log:
```response
...Executor): Key condition: (column 1 in [749927693, 749927693])
# highlight-next-line
...Executor): Used generic exclusion search over index for part all_1_9_2
with 1453 steps
...Executor): Selected 1/1 parts by partition key, 1 parts by primary key,
# highlight-next-line
980/1083 marks by primary key, 980 marks to read from 23 ranges
...Executor): Reading approx. 8028160 rows with 10 streams
```
We now have two tables: one optimized for speeding up queries filtering on
UserIDs
, and one optimized for speeding up queries filtering on URLs, respectively:
Option 2: Materialized Views {#option-2-materialized-views}
Create a
materialized view
on our existing table.
```sql
CREATE MATERIALIZED VIEW mv_hits_URL_UserID
ENGINE = MergeTree()
PRIMARY KEY (URL, UserID)
ORDER BY (URL, UserID, EventTime)
POPULATE
AS SELECT * FROM hits_UserID_URL;
```
The response looks like:
```response
Ok.
0 rows in set. Elapsed: 2.935 sec. Processed 8.87 million rows, 838.84 MB (3.02 million rows/s., 285.84 MB/s.)
```
:::note
- we switch the order of the key columns (compared to our
original table
) in the view's primary key
- the materialized view is backed by an
implicitly created table
whose row order and primary index are based on the given primary key definition
- the implicitly created table is listed by the
SHOW TABLES
query and has a name starting with
.inner
- it is also possible to first explicitly create the backing table for a materialized view and then the view can target that table via the
TO [db].[table]
clause
- we use the
POPULATE
keyword in order to immediately populate the implicitly created table with all 8.87 million rows from the source table
hits_UserID_URL
- if new rows are inserted into the source table hits_UserID_URL, then those rows are automatically also inserted into the implicitly created table
- Effectively the implicitly created table has the same row order and primary index as the
secondary table that we created explicitly
:
ClickHouse is storing the
column data files
(*.bin), the
mark files
(*.mrk2) and the
primary index
(primary.idx) of the implicitly created table in a special folder within the ClickHouse server's data directory:
:::
The implicitly created table (and its primary index) backing the materialized view can now be used to significantly speed up the execution of our example query filtering on the URL column:
```sql
SELECT UserID, count(UserID) AS Count
-- highlight-next-line
FROM mv_hits_URL_UserID
WHERE URL = 'http://public_search'
GROUP BY UserID
ORDER BY Count DESC
LIMIT 10;
```
The response is:
```response
ββββββUserIDββ¬βCountββ
β 2459550954 β 3741 β
β 1084649151 β 2484 β
β 723361875 β 729 β
β 3087145896 β 695 β
β 2754931092 β 672 β
β 1509037307 β 582 β
β 3085460200 β 573 β
β 2454360090 β 556 β
β 3884990840 β 539 β
β 765730816 β 536 β
ββββββββββββββ΄ββββββββ
10 rows in set. Elapsed: 0.026 sec.
# highlight-next-line
Processed 335.87 thousand rows,
13.54 MB (12.91 million rows/s., 520.38 MB/s.)
```
Because effectively the implicitly created table (and its primary index) backing the materialized view is identical to the
secondary table that we created explicitly
, the query is executed in the same effective way as with the explicitly created table.
The corresponding trace log in the ClickHouse server log file confirms that ClickHouse is running binary search over the index marks:
```response
...Executor): Key condition: (column 0 in ['http://public_search',
'http://public_search'])
highlight-next-line
...Executor): Running binary search on index range ...
...
...Executor): Selected 4/4 parts by partition key, 4 parts by primary key,
highlight-next-line
41/1083 marks by primary key, 41 marks to read from 4 ranges
...Executor): Reading approx. 335872 rows with 4 streams
```
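The binary search over index marks can be illustrated with a short Python toy model (the mark values and the `mark_range` helper are invented for illustration; this is not ClickHouse's implementation):

```python
import bisect

# Toy model of a sparse primary index: the first URL value of each granule,
# in index order (URL is the first key column of the implicitly created table).
marks = ["http://a", "http://b", "http://public_search",
         "http://public_search", "http://z"]

def mark_range(marks, value):
    """Return the half-open range of granules that may contain `value`."""
    lo = bisect.bisect_left(marks, value)
    hi = bisect.bisect_right(marks, value)
    # The granule just before `lo` can still contain rows with `value`,
    # because a mark only records the granule's *first* key value.
    return max(lo - 1, 0), hi

print(mark_range(marks, "http://public_search"))  # -> (1, 4)
```

Because the marks are sorted by the first key column, locating the matching granule range costs O(log n) comparisons instead of a scan over all marks.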
Option 3: Projections {#option-3-projections}
Create a projection on our existing table:
sql
ALTER TABLE hits_UserID_URL
ADD PROJECTION prj_url_userid
(
SELECT *
ORDER BY (URL, UserID)
);
And materialize the projection:
sql
ALTER TABLE hits_UserID_URL
MATERIALIZE PROJECTION prj_url_userid;
:::note
- the projection is creating a
hidden table
whose row order and primary index is based on the given
ORDER BY
clause of the projection
- the hidden table is not listed by the
SHOW TABLES
query
- we use the
MATERIALIZE
keyword in order to immediately populate the hidden table with all 8.87 million rows from the source table
hits_UserID_URL | {"source_file": "sparse-primary-indexes.md"} | [
-0.029493117704987526,
-0.048094913363456726,
-0.0677509754896164,
0.0764976367354393,
-0.009057816118001938,
-0.06413411349058151,
-0.0057091559283435345,
-0.062327489256858826,
0.02951662428677082,
0.026606254279613495,
0.08250133693218231,
0.008761530742049217,
0.0484071783721447,
-0.10... |
ab86f870-856f-4246-bd6c-efaad35d02cb | - if new rows are inserted into the source table hits_UserID_URL, then those rows are automatically also inserted into the hidden table
- a query is always (syntactically) targeting the source table hits_UserID_URL, but if the row order and primary index of the hidden table allows a more effective query execution, then that hidden table will be used instead
- please note that projections do not make queries that use ORDER BY more efficient, even if the ORDER BY matches the projection's ORDER BY statement (see https://github.com/ClickHouse/ClickHouse/issues/47333)
- Effectively the implicitly created hidden table has the same row order and primary index as the
secondary table that we created explicitly
:
ClickHouse is storing the
column data files
(
.bin), the
mark files
(
.mrk2) and the
primary index
(primary.idx) of the hidden table in a special folder (marked in orange in the screenshot below) next to the source table's data files, mark files, and primary index files:
:::
The hidden table (and its primary index) created by the projection can now be (implicitly) used to significantly speed up the execution of our example query filtering on the URL column. Note that the query is syntactically targeting the source table of the projection.
sql
SELECT UserID, count(UserID) AS Count
-- highlight-next-line
FROM hits_UserID_URL
WHERE URL = 'http://public_search'
GROUP BY UserID
ORDER BY Count DESC
LIMIT 10;
The response is:
```response
ββββββUserIDββ¬βCountββ
β 2459550954 β 3741 β
β 1084649151 β 2484 β
β 723361875 β 729 β
β 3087145896 β 695 β
β 2754931092 β 672 β
β 1509037307 β 582 β
β 3085460200 β 573 β
β 2454360090 β 556 β
β 3884990840 β 539 β
β 765730816 β 536 β
ββββββββββββββ΄ββββββββ
10 rows in set. Elapsed: 0.029 sec.
highlight-next-line
Processed 319.49 thousand rows,
11.38 MB (11.05 million rows/s., 393.58 MB/s.)
```
Because effectively the hidden table (and its primary index) created by the projection is identical to the
secondary table that we created explicitly
, the query is executed in the same effective way as with the explicitly created table.
The corresponding trace log in the ClickHouse server log file confirms that ClickHouse is running binary search over the index marks:
```response
...Executor): Key condition: (column 0 in ['http://public_search',
'http://public_search'])
highlight-next-line
...Executor): Running binary search on index range for part prj_url_userid (1083 marks)
...Executor): ...
highlight-next-line
...Executor): Choose complete Normal projection prj_url_userid
...Executor): projection required columns: URL, UserID
...Executor): Selected 1/1 parts by partition key, 1 parts by primary key,
highlight-next-line
39/1083 marks by primary key, 39 marks to read from 1 ranges
...Executor): Reading approx. 319488 rows with 2 streams
```
Summary {#summary} | {"source_file": "sparse-primary-indexes.md"} | [
-0.10682723671197891,
0.009819621220231056,
-0.06612306088209152,
0.06385249644517899,
0.04092962294816971,
-0.02276666648685932,
0.014469574205577374,
-0.062201447784900665,
0.06444935500621796,
0.05374510586261749,
0.05740398168563843,
0.046132124960422516,
0.09257817268371582,
-0.129435... |
86ebfcf0-8725-4f03-b7f7-e6b7c7a7b1c6 | highlight-next-line
39/1083 marks by primary key, 39 marks to read from 1 ranges
...Executor): Reading approx. 319488 rows with 2 streams
```
Summary {#summary}
The primary index of our
table with compound primary key (UserID, URL)
was very useful for speeding up a
query filtering on UserID
. But that index is not providing significant help with speeding up a
query filtering on URL
, despite the URL column being part of the compound primary key.
And vice versa:
The primary index of our
table with compound primary key (URL, UserID)
was speeding up a
query filtering on URL
, but didn't provide much support for a
query filtering on UserID
.
Because of the similarly high cardinality of the primary key columns UserID and URL, a query that filters on the second key column
doesn't benefit much from the second key column being in the index
.
Therefore it makes sense to remove the second key column from the primary index (resulting in less memory consumption of the index) and to
use multiple primary indexes
instead.
However, if the key columns in a compound primary key have big differences in cardinality, then it is
beneficial for queries
to order the primary key columns by cardinality in ascending order.
The higher the cardinality difference between the key columns is, the more the order of those columns in the key matters. We will demonstrate that in the next section.
Ordering key columns efficiently {#ordering-key-columns-efficiently}
In a compound primary key the order of the key columns can significantly influence both:
- the efficiency of the filtering on secondary key columns in queries, and
- the compression ratio for the table's data files.
In order to demonstrate that, we will use a version of our
web traffic sample data set
where each row contains three columns that indicate whether or not the access by an internet 'user' (
UserID
column) to a URL (
URL
column) got marked as bot traffic (
IsRobot
column).
We will use a compound primary key containing all three aforementioned columns that could be used to speed up typical web analytics queries that calculate
- how much (percentage of) traffic to a specific URL is from bots or
- how confident we are that a specific user is (not) a bot (what percentage of traffic from that user is (not) assumed to be bot traffic)
We use this query for calculating the cardinalities of the three columns that we want to use as key columns in a compound primary key (note that we are using the
URL table function
for querying TSV data ad hoc without having to create a local table). Run this query in
clickhouse client
: | {"source_file": "sparse-primary-indexes.md"} | [
0.013201690278947353,
0.054666224867105484,
0.005498996004462242,
-0.030469216406345367,
0.023116478696465492,
-0.032277513295412064,
-0.005489179864525795,
-0.028739910572767258,
0.07653304189443588,
-0.05216321349143982,
-0.018407069146633148,
-0.014399277977645397,
-0.01800471916794777,
... |
9e419f03-37f7-4605-8021-93c405311a25 | sql
SELECT
formatReadableQuantity(uniq(URL)) AS cardinality_URL,
formatReadableQuantity(uniq(UserID)) AS cardinality_UserID,
formatReadableQuantity(uniq(IsRobot)) AS cardinality_IsRobot
FROM
(
SELECT
c11::UInt64 AS UserID,
c15::String AS URL,
c20::UInt8 AS IsRobot
FROM url('https://datasets.clickhouse.com/hits/tsv/hits_v1.tsv.xz')
WHERE URL != ''
)
The response is:
```response
ββcardinality_URLββ¬βcardinality_UserIDββ¬βcardinality_IsRobotββ
β 2.39 million β 119.08 thousand β 4.00 β
βββββββββββββββββββ΄βββββββββββββββββββββ΄ββββββββββββββββββββββ
1 row in set. Elapsed: 118.334 sec. Processed 8.87 million rows, 15.88 GB (74.99 thousand rows/s., 134.21 MB/s.)
```
We can see that there is a big difference between the cardinalities, especially between the
URL
and
IsRobot
columns, and therefore the order of these columns in a compound primary key is significant both for the efficient speed up of queries filtering on those columns and for achieving optimal compression ratios for the table's column data files.
In order to demonstrate that, we create two table versions for our bot traffic analysis data:
- a table
hits_URL_UserID_IsRobot
with the compound primary key
(URL, UserID, IsRobot)
where we order the key columns by cardinality in descending order
- a table
hits_IsRobot_UserID_URL
with the compound primary key
(IsRobot, UserID, URL)
where we order the key columns by cardinality in ascending order
Create the table
hits_URL_UserID_IsRobot
with the compound primary key
(URL, UserID, IsRobot)
:
sql
CREATE TABLE hits_URL_UserID_IsRobot
(
`UserID` UInt32,
`URL` String,
`IsRobot` UInt8
)
ENGINE = MergeTree
-- highlight-next-line
PRIMARY KEY (URL, UserID, IsRobot);
And populate it with 8.87 million rows:
sql
INSERT INTO hits_URL_UserID_IsRobot SELECT
intHash32(c11::UInt64) AS UserID,
c15 AS URL,
c20 AS IsRobot
FROM url('https://datasets.clickhouse.com/hits/tsv/hits_v1.tsv.xz')
WHERE URL != '';
This is the response:
response
0 rows in set. Elapsed: 104.729 sec. Processed 8.87 million rows, 15.88 GB (84.73 thousand rows/s., 151.64 MB/s.)
Next, create the table
hits_IsRobot_UserID_URL
with the compound primary key
(IsRobot, UserID, URL)
:
sql
CREATE TABLE hits_IsRobot_UserID_URL
(
`UserID` UInt32,
`URL` String,
`IsRobot` UInt8
)
ENGINE = MergeTree
-- highlight-next-line
PRIMARY KEY (IsRobot, UserID, URL);
And populate it with the same 8.87 million rows that we used to populate the previous table:
sql
INSERT INTO hits_IsRobot_UserID_URL SELECT
intHash32(c11::UInt64) AS UserID,
c15 AS URL,
c20 AS IsRobot
FROM url('https://datasets.clickhouse.com/hits/tsv/hits_v1.tsv.xz')
WHERE URL != '';
The response is:
response
0 rows in set. Elapsed: 95.959 sec. Processed 8.87 million rows, 15.88 GB (92.48 thousand rows/s., 165.50 MB/s.) | {"source_file": "sparse-primary-indexes.md"} | [
0.05901200696825981,
-0.0388614796102047,
-0.051519546657800674,
0.04765577241778374,
-0.07897619158029556,
-0.05656718462705612,
-0.022247519344091415,
0.03423338383436203,
0.042725395411252975,
0.06726779043674469,
0.0003485116467345506,
-0.02823810838162899,
0.04594673961400986,
-0.0849... |
9bbd77e1-a0a3-473f-b0d0-cc098108e237 | The response is:
response
0 rows in set. Elapsed: 95.959 sec. Processed 8.87 million rows, 15.88 GB (92.48 thousand rows/s., 165.50 MB/s.)
Efficient filtering on secondary key columns {#efficient-filtering-on-secondary-key-columns}
When a query is filtering on at least one column that is part of a compound key, and is the first key column,
then ClickHouse is running the binary search algorithm over the key column's index marks
.
When a query is filtering (only) on a column that is part of a compound key, but is not the first key column,
then ClickHouse is using the generic exclusion search algorithm over the key column's index marks
.
For the second case the ordering of the key columns in the compound primary key is significant for the effectiveness of the
generic exclusion search algorithm
.
This is a query that is filtering on the
UserID
column of the table where we ordered the key columns
(URL, UserID, IsRobot)
by cardinality in descending order:
sql
SELECT count(*)
FROM hits_URL_UserID_IsRobot
WHERE UserID = 112304
The response is:
```response
ββcount()ββ
β 73 β
βββββββββββ
1 row in set. Elapsed: 0.026 sec.
highlight-next-line
Processed 7.92 million rows,
31.67 MB (306.90 million rows/s., 1.23 GB/s.)
```
This is the same query on the table where we ordered the key columns
(IsRobot, UserID, URL)
by cardinality in ascending order:
sql
SELECT count(*)
FROM hits_IsRobot_UserID_URL
WHERE UserID = 112304
The response is:
```response
ββcount()ββ
β 73 β
βββββββββββ
1 row in set. Elapsed: 0.003 sec.
highlight-next-line
Processed 20.32 thousand rows,
81.28 KB (6.61 million rows/s., 26.44 MB/s.)
```
We can see that the query execution is significantly more effective and faster on the table where we ordered the key columns by cardinality in ascending order.
The reason for that is that the
generic exclusion search algorithm
works most effectively when
granules
are selected via a secondary key column where the predecessor key column has a lower cardinality. We illustrated that in detail in a
previous section
of this guide.
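The effect of the predecessor column's cardinality on the generic exclusion search can be sketched with a small Python experiment (a toy model with made-up data and a simplified granule/mark layout, not ClickHouse code):

```python
import random
random.seed(0)

GRANULE = 64

def skippable_granules(rows, target):
    """Count granules whose [min, max] range of the second key column
    cannot contain `target` (a toy model of generic exclusion search)."""
    rows = sorted(rows)                      # table stored sorted by (k1, k2)
    skipped = total = 0
    for i in range(0, len(rows), GRANULE):
        granule = rows[i:i + GRANULE]
        lo = min(k2 for _, k2 in granule)
        hi = max(k2 for _, k2 in granule)
        total += 1
        if not (lo <= target <= hi):
            skipped += 1
    return skipped, total

n = 8192
low_card_first  = [(random.randrange(2),      random.randrange(1000)) for _ in range(n)]
high_card_first = [(random.randrange(10**6),  random.randrange(1000)) for _ in range(n)]

print(skippable_granules(low_card_first, 500))   # most granules can be excluded
print(skippable_granules(high_card_first, 500))  # almost none can be excluded
```

With a low-cardinality first key column, the second column is locally sorted inside each long run, so most granules have narrow min/max ranges and can be excluded; with a high-cardinality first column the second column is effectively random per granule and almost every granule must be read.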
Optimal compression ratio of data files {#optimal-compression-ratio-of-data-files}
This query compares the compression ratio of the
UserID
column between the two tables that we created above:
sql
SELECT
table AS Table,
name AS Column,
formatReadableSize(data_uncompressed_bytes) AS Uncompressed,
formatReadableSize(data_compressed_bytes) AS Compressed,
round(data_uncompressed_bytes / data_compressed_bytes, 0) AS Ratio
FROM system.columns
WHERE (table = 'hits_URL_UserID_IsRobot' OR table = 'hits_IsRobot_UserID_URL') AND (name = 'UserID')
ORDER BY Ratio ASC | {"source_file": "sparse-primary-indexes.md"} | [
0.01939740590751171,
0.020108165219426155,
-0.05339557304978371,
0.004213368520140648,
0.031496401876211166,
-0.10974746942520142,
0.004269157070666552,
-0.022924037650227547,
0.023519165813922882,
-0.00965241901576519,
0.0341007336974144,
0.014788530766963959,
0.059924278408288956,
-0.143... |
434c7d9f-5abb-447e-80ce-cc2ec8ac0577 | This is the response:
```response
ββTableββββββββββββββββββββ¬βColumnββ¬βUncompressedββ¬βCompressedββ¬βRatioββ
β hits_URL_UserID_IsRobot β UserID β 33.83 MiB β 11.24 MiB β 3 β
β hits_IsRobot_UserID_URL β UserID β 33.83 MiB β 877.47 KiB β 39 β
βββββββββββββββββββββββββββ΄βββββββββ΄βββββββββββββββ΄βββββββββββββ΄ββββββββ
2 rows in set. Elapsed: 0.006 sec.
```
We can see that the compression ratio for the
UserID
column is significantly higher for the table where we ordered the key columns
(IsRobot, UserID, URL) by cardinality in ascending order.
Although in both tables exactly the same data is stored (we inserted the same 8.87 million rows into both tables), the order of the key columns in the compound primary key has a significant influence on how much disk space the
compressed
data in the table's
column data files
requires:
- in the table
hits_URL_UserID_IsRobot
with the compound primary key
(URL, UserID, IsRobot)
where we order the key columns by cardinality in descending order, the
UserID.bin
data file takes
11.24 MiB
of disk space
- in the table
hits_IsRobot_UserID_URL
with the compound primary key
(IsRobot, UserID, URL)
where we order the key columns by cardinality in ascending order, the
UserID.bin
data file takes only
877.47 KiB
of disk space
Having a good compression ratio for the data of a table's column on disk not only saves space on disk, but also makes queries (especially analytical ones) that require the reading of data from that column faster, as less I/O is required for moving the column's data from disk to the main memory (the operating system's file cache).
In the following we illustrate why it's beneficial for the compression ratio of a table's columns to order the primary key columns by cardinality in ascending order.
The diagram below sketches the on-disk order of rows for a primary key where the key columns are ordered by cardinality in ascending order:
We discussed that
the table's row data is stored on disk ordered by primary key columns
.
In the diagram above, the table's rows (their column values on disk) are first ordered by their
cl
value, and rows that have the same
cl
value are ordered by their
ch
value. And because the first key column
cl
has low cardinality, it is likely that there are rows with the same
cl
value. And because of that it is also likely that
ch
values are ordered (locally - for rows with the same
cl
value).
If similar data in a column is placed close to each other, for example via sorting, then that data will compress better.
In general, a compression algorithm benefits from the run length of data (the more data it sees the better for compression)
and locality (the more similar the data is, the better the compression ratio is).
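That run-length and locality benefit is easy to verify with a small Python experiment, using `zlib` here only as a stand-in for ClickHouse's column compression codecs:

```python
import random
import zlib

random.seed(1)
values = [random.randrange(256) for _ in range(100_000)]

shuffled = bytes(values)          # values in arbitrary on-disk order
ordered  = bytes(sorted(values))  # same values, sorted -> long runs of equal bytes

print(len(zlib.compress(shuffled)))  # roughly the size of the input
print(len(zlib.compress(ordered)))   # a tiny fraction of the input
```

Both byte strings contain exactly the same values; only their order differs, yet the sorted version compresses dramatically better.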
In contrast to the diagram above, the diagram below sketches the on-disk order of rows for a primary key where the key columns are ordered by cardinality in descending order: | {"source_file": "sparse-primary-indexes.md"} | [
-0.002390448236837983,
-0.011070813052356243,
-0.0842941477894783,
0.01933257095515728,
0.011615459807217121,
-0.11397822201251984,
-0.08145976066589355,
0.04141684249043465,
0.02769809402525425,
0.07532712817192078,
0.007752840872853994,
0.025656649842858315,
0.060360100120306015,
-0.0850... |
64f4ba23-dbbd-4940-97e8-790b8ed4c02d | In contrast to the diagram above, the diagram below sketches the on-disk order of rows for a primary key where the key columns are ordered by cardinality in descending order:
Now the table's rows are first ordered by their
ch
value, and rows that have the same
ch
value are ordered by their
cl
value.
But because the first key column
ch
has high cardinality, it is unlikely that there are rows with the same
ch
value. And because of that it is also unlikely that
cl
values are ordered (locally - for rows with the same
ch
value).
Therefore the
cl
values are most likely in random order and therefore have bad locality and, consequently, a bad compression ratio.
Summary {#summary-1}
For both the efficient filtering on secondary key columns in queries and the compression ratio of a table's column data files it is beneficial to order the columns in a primary key by their cardinality in ascending order.
Identifying single rows efficiently {#identifying-single-rows-efficiently}
Although in general it is
not
the best use case for ClickHouse,
sometimes applications built on top of ClickHouse need to identify single rows of a ClickHouse table.
An intuitive solution for that might be to use a
UUID
column with a unique value per row and for fast retrieval of rows to use that column as a primary key column.
For the fastest retrieval, the UUID column
would need to be the first key column
.
We discussed that because
a ClickHouse table's row data is stored on disk ordered by primary key column(s)
, having a very high cardinality column (like a UUID column) in a primary key or in a compound primary key before columns with lower cardinality
is detrimental for the compression ratio of other table columns
.
A compromise between fastest retrieval and optimal data compression is to use a compound primary key where the UUID is the last key column, after low(er) cardinality key columns that are used to ensure a good compression ratio for some of the table's columns.
A concrete example {#a-concrete-example}
One concrete example is the plaintext paste service
https://pastila.nl
that Alexey Milovidov developed and
blogged about
.
On every change to the text-area, the data is saved automatically into a ClickHouse table row (one row per change).
And one way to identify and retrieve (a specific version of) the pasted content is to use a hash of the content as the UUID for the table row that contains the content.
The following diagram shows
- the insert order of rows when the content changes (for example because of keystrokes typing the text into the text-area) and
- the on-disk order of the data from the inserted rows when the
PRIMARY KEY (hash)
is used: | {"source_file": "sparse-primary-indexes.md"} | [
-0.026744650676846504,
0.028801489621400833,
-0.012662135995924473,
-0.06312308460474014,
0.06815672665834427,
-0.06363561004400253,
-0.005194548051804304,
-0.04676690697669983,
0.1069045439362526,
0.005357626359909773,
-0.009551932103931904,
0.07424408942461014,
0.058012738823890686,
-0.1... |
66484742-182d-4923-af4b-5c42f3fdca93 | Because the
hash
column is used as the primary key column
- specific rows can be retrieved
very quickly
, but
- the table's rows (their column data) are stored on disk ordered ascending by (the unique and random) hash values. Therefore also the content column's values are stored in random order with no data locality resulting in a
suboptimal compression ratio for the content column data file
.
In order to significantly improve the compression ratio for the content column while still achieving fast retrieval of specific rows, pastila.nl is using two hashes (and a compound primary key) for identifying a specific row:
- a hash of the content, as discussed above, that is distinct for distinct data, and
- a
locality-sensitive hash (fingerprint)
that does
not
change on small changes of data.
The following diagram shows
- the insert order of rows when the content changes (for example because of keystrokes typing the text into the text-area) and
- the on-disk order of the data from the inserted rows when the compound
PRIMARY KEY (fingerprint, hash)
is used:
Now the rows on disk are first ordered by
fingerprint
, and for rows with the same fingerprint value, their
hash
value determines the final order.
Because data that differs only in small changes is getting the same fingerprint value, similar data is now stored on disk close to each other in the content column. And that is very good for the compression ratio of the content column, as a compression algorithm in general benefits from data locality (the more similar the data is the better the compression ratio is).
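The two-hash idea can be sketched in Python; the shingle-based `fingerprint` below is a toy locality-sensitive sketch invented for illustration, not pastila.nl's actual fingerprint:

```python
import hashlib

def content_hash(text: str) -> str:
    """Distinct for distinct content (the `hash` role)."""
    return hashlib.sha256(text.encode()).hexdigest()

def fingerprint(text: str) -> frozenset:
    """Toy locality-sensitive sketch: the set of hashed 3-word shingles.
    Small edits change only a few shingles, so similarity stays high."""
    words = text.split()
    shingles = {" ".join(words[i:i + 3]) for i in range(len(words) - 2)}
    return frozenset(hashlib.md5(s.encode()).hexdigest() for s in shingles)

def similarity(a: frozenset, b: frozenset) -> float:
    return len(a & b) / len(a | b)

v1 = "the quick brown fox jumps over the lazy dog " * 20
v2 = v1 + "extra keystroke"

print(content_hash(v1) == content_hash(v2))          # False: any edit changes the hash
print(similarity(fingerprint(v1), fingerprint(v2)))  # high: most shingles are unchanged
```

The content hash pins down one exact version, while the fingerprint stays (nearly) stable across keystrokes, which is exactly what lets similar versions sort next to each other on disk.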
The compromise is that two fields (
fingerprint
and
hash
) are required for the retrieval of a specific row in order to optimally utilise the primary index that results from the compound
PRIMARY KEY (fingerprint, hash)
. | {"source_file": "sparse-primary-indexes.md"} | [
-0.018135488033294678,
0.04287270829081535,
-0.03045014850795269,
-0.07142333686351776,
0.06279915571212769,
-0.00863632746040821,
-0.04980448633432388,
-0.010556291788816452,
0.1607295125722885,
0.0754794180393219,
0.031031982973217964,
0.1227181926369667,
-0.029498113319277763,
-0.053373... |
f6b82554-d4e9-46b6-abe9-b5594059a7ed | slug: /optimize/query-optimization
sidebar_label: 'Query optimization'
title: 'Guide for Query optimization'
description: 'A simple guide for query optimization that describe common path to improve query performance'
doc_type: 'guide'
keywords: ['query optimization', 'performance', 'best practices', 'query tuning', 'efficiency']
import queryOptimizationDiagram1 from '@site/static/images/guides/best-practices/query_optimization_diagram_1.png';
import Image from '@theme/IdealImage';
A simple guide for query optimization
This section aims to illustrate through common scenarios how to use different performance and optimization techniques, such as
analyzer
,
query profiling
or
avoid nullable Columns
, in order to improve your ClickHouse query performances.
Understand query performance {#understand-query-performance}
The best moment to think about performance optimization is when you're setting up your
data schema
before ingesting data into ClickHouse for the first time.
But let's be honest; it is difficult to predict how much your data will grow or what types of queries will be executed.
If you have an existing deployment with a few queries that you want to improve, the first step is understanding how those queries perform and why some execute in a few milliseconds while others take longer.
ClickHouse has a rich set of tools to help you understand how your query is getting executed and the resources consumed to perform the execution.
In this section, we will look at those tools and how to use them.
General considerations {#general-considerations}
To understand query performance, let's look at what happens in ClickHouse when a query is executed.
The following part is deliberately simplified and takes some shortcuts; the idea here is not to drown you with details but to get you up to speed with the basic concepts. For more information you can read about
query analyzer
.
From a very high-level standpoint, when ClickHouse executes a query, the following happens:
Query parsing and analysis
The query is parsed and analyzed, and a generic query execution plan is created.
Query optimization
The query execution plan is optimized, unnecessary data is pruned, and a query pipeline is built from the query plan.
Query pipeline execution
The data is read and processed in parallel. This is the stage where ClickHouse actually executes the query operations such as filtering, aggregations, and sorting.
Final processing
The results are merged, sorted, and formatted into a final result before being sent to the client.
In reality, many
optimizations
are taking place, and we will discuss them a bit more in this guide, but for now, those main concepts give us a good understanding of what is happening behind the scenes when ClickHouse executes a query. | {"source_file": "query-optimization.md"} | [
0.02488396316766739,
0.04571940377354622,
-0.04978534206748009,
0.06089426577091217,
-0.027385815978050232,
-0.07798305153846741,
0.014665466733276844,
0.04426310956478119,
-0.09905830770730972,
-0.026751792058348656,
-0.014146034605801105,
0.039541710168123245,
0.09834425896406174,
-0.026... |
e01e343f-e7a7-42e0-8fe5-069558052c6c | With this high-level understanding, let's examine the tooling ClickHouse provides and how we can use it to track the metrics that affect query performance.
Dataset {#dataset}
We'll use a real example to illustrate how we approach query performance.
Let's use the NYC Taxi dataset, which contains taxi ride data in NYC. First, we start by ingesting the NYC taxi dataset with no optimization.
Below is the command to create the table and insert data from an S3 bucket. Note that we deliberately infer the schema from the data, which is not optimized.
```sql
-- Create table with inferred schema
CREATE TABLE trips_small_inferred
ORDER BY () EMPTY
AS SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/clickhouse-academy/nyc_taxi_2009-2010.parquet');
-- Insert data into table with inferred schema
INSERT INTO trips_small_inferred
SELECT *
FROM s3Cluster
('default','https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/clickhouse-academy/nyc_taxi_2009-2010.parquet');
```
Let's have a look at the table schema automatically inferred from the data.
```sql
--- Display inferred table schema
SHOW CREATE TABLE trips_small_inferred
Query id: d97361fd-c050-478e-b831-369469f0784d
CREATE TABLE nyc_taxi.trips_small_inferred
(
vendor_id
Nullable(String),
pickup_datetime
Nullable(DateTime64(6, 'UTC')),
dropoff_datetime
Nullable(DateTime64(6, 'UTC')),
passenger_count
Nullable(Int64),
trip_distance
Nullable(Float64),
ratecode_id
Nullable(String),
pickup_location_id
Nullable(String),
dropoff_location_id
Nullable(String),
payment_type
Nullable(Int64),
fare_amount
Nullable(Float64),
extra
Nullable(Float64),
mta_tax
Nullable(Float64),
tip_amount
Nullable(Float64),
tolls_amount
Nullable(Float64),
total_amount
Nullable(Float64)
)
ORDER BY tuple()
```
Spot the slow queries {#spot-the-slow-queries}
Query logs {#query-logs}
By default, ClickHouse collects and logs information about each executed query in the
query logs
. This data is stored in the table
system.query_log
.
For each executed query, ClickHouse logs statistics such as query execution time, number of rows read, and resource usage, such as CPU, memory usage, or filesystem cache hits.
Therefore, the query log is a good place to start when investigating slow queries. You can easily spot the queries that take a long time to execute and display the resource usage information for each one.
Let's find the top five long-running queries on our NYC taxi dataset.
```sql
-- Find top 5 long running queries from nyc_taxi database in the last 1 hour
SELECT
type,
event_time,
query_duration_ms,
query,
read_rows,
tables
FROM clusterAllReplicas(default, system.query_log)
WHERE has(databases, 'nyc_taxi') AND (event_time >= (now() - toIntervalMinute(60))) AND type='QueryFinish'
ORDER BY query_duration_ms DESC
LIMIT 5
FORMAT VERTICAL | {"source_file": "query-optimization.md"} | [
0.025845475494861603,
-0.08144160360097885,
-0.04485420510172844,
0.02825569175183773,
-0.01411157101392746,
-0.012888059951364994,
0.018985562026500702,
-0.039843227714300156,
0.00683348486199975,
0.032594967633485794,
0.02465861476957798,
-0.04896058887243271,
0.05616122856736183,
-0.081... |
bf85a288-1fa9-4079-8b74-de41222f5f1c | Query id: e3d48c9f-32bb-49a4-8303-080f59ed1835
Row 1:
ββββββ
type: QueryFinish
event_time: 2024-11-27 11:12:36
query_duration_ms: 2967
query: WITH
dateDiff('s', pickup_datetime, dropoff_datetime) as trip_time,
trip_distance / trip_time * 3600 AS speed_mph
SELECT
quantiles(0.5, 0.75, 0.9, 0.99)(trip_distance)
FROM
nyc_taxi.trips_small_inferred
WHERE
speed_mph > 30
FORMAT JSON
read_rows: 329044175
tables: ['nyc_taxi.trips_small_inferred']
Row 2:
ββββββ
type: QueryFinish
event_time: 2024-11-27 11:11:33
query_duration_ms: 2026
query: SELECT
payment_type,
COUNT() AS trip_count,
formatReadableQuantity(SUM(trip_distance)) AS total_distance,
AVG(total_amount) AS total_amount_avg,
AVG(tip_amount) AS tip_amount_avg
FROM
nyc_taxi.trips_small_inferred
WHERE
pickup_datetime >= '2009-01-01' AND pickup_datetime < '2009-04-01'
GROUP BY
payment_type
ORDER BY
trip_count DESC;
read_rows: 329044175
tables: ['nyc_taxi.trips_small_inferred']
Row 3:
ββββββ
type: QueryFinish
event_time: 2024-11-27 11:12:17
query_duration_ms: 1860
query: SELECT
avg(dateDiff('s', pickup_datetime, dropoff_datetime))
FROM nyc_taxi.trips_small_inferred
WHERE passenger_count = 1 or passenger_count = 2
FORMAT JSON
read_rows: 329044175
tables: ['nyc_taxi.trips_small_inferred']
Row 4:
ββββββ
type: QueryFinish
event_time: 2024-11-27 11:12:31
query_duration_ms: 690
query: SELECT avg(total_amount) FROM nyc_taxi.trips_small_inferred WHERE trip_distance > 5
FORMAT JSON
read_rows: 329044175
tables: ['nyc_taxi.trips_small_inferred']
Row 5:
ββββββ
type: QueryFinish
event_time: 2024-11-27 11:12:44
query_duration_ms: 634
query: SELECT
vendor_id,
avg(total_amount),
avg(trip_distance),
FROM
nyc_taxi.trips_small_inferred
GROUP BY vendor_id
ORDER BY 1 DESC
FORMAT JSON
read_rows: 329044175
tables: ['nyc_taxi.trips_small_inferred']
```
The field
query_duration_ms
indicates how long it took for that particular query to execute. Looking at the results from the query logs, we can see that the first query is taking 2967ms to run, which could be improved.
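The same top-N selection can also be sketched client-side once the rows are fetched; the row dicts below are made-up stand-ins for rows read from `system.query_log`:

```python
import heapq

# Stand-in rows, as they might be fetched from system.query_log.
log_rows = [
    {"query": "quantiles over speed_mph", "query_duration_ms": 2967},
    {"query": "per-payment-type stats",   "query_duration_ms": 2026},
    {"query": "avg trip time",            "query_duration_ms": 1860},
    {"query": "avg total_amount",         "query_duration_ms": 690},
    {"query": "per-vendor averages",      "query_duration_ms": 634},
    {"query": "cheap lookup",             "query_duration_ms": 12},
]

top5 = heapq.nlargest(5, log_rows, key=lambda r: r["query_duration_ms"])
for row in top5:
    print(f'{row["query_duration_ms"]:>5} ms  {row["query"]}')
```

In practice you would keep the filtering and ordering in SQL as shown above, since `system.query_log` can be large; this is only the selection logic made explicit.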
You might also want to know which queries are stressing the system by examining the query that consumes the most memory or CPU. | {"source_file": "query-optimization.md"} | [
0.040814634412527084,
0.04668862372636795,
0.005373885855078697,
0.09005189687013626,
-0.056419629603624344,
-0.020528597757220268,
0.037294913083314896,
0.012289985083043575,
-0.0146331787109375,
-0.01676803268492222,
0.0499059222638607,
-0.11720075458288193,
-0.05616014823317528,
0.01047... |
7de94426-41f5-4254-a367-3dc914c7f296 | You might also want to know which queries are stressing the system by examining the query that consumes the most memory or CPU.
```sql
-- Top queries by memory usage
SELECT
    type,
    event_time,
    query_id,
    formatReadableSize(memory_usage) AS memory,
    ProfileEvents.Values[indexOf(ProfileEvents.Names, 'UserTimeMicroseconds')] AS userCPU,
    ProfileEvents.Values[indexOf(ProfileEvents.Names, 'SystemTimeMicroseconds')] AS systemCPU,
    (ProfileEvents['CachedReadBufferReadFromCacheMicroseconds']) / 1000000 AS FromCacheSeconds,
    (ProfileEvents['CachedReadBufferReadFromSourceMicroseconds']) / 1000000 AS FromSourceSeconds,
    normalized_query_hash
FROM clusterAllReplicas(default, system.query_log)
WHERE has(databases, 'nyc_taxi') AND (type = 'QueryFinish') AND ((event_time >= (now() - toIntervalDay(2))) AND (event_time <= now())) AND (user NOT ILIKE '%internal%')
ORDER BY memory_usage DESC
LIMIT 30
```
Let's isolate the long-running queries we found and rerun them a few times to understand the response time.

At this point, it is essential to turn off the filesystem cache by setting `enable_filesystem_cache` to 0, to improve reproducibility.
```sql
-- Disable filesystem cache
set enable_filesystem_cache = 0;
-- Run query 1
WITH
dateDiff('s', pickup_datetime, dropoff_datetime) as trip_time,
trip_distance / trip_time * 3600 AS speed_mph
SELECT
quantiles(0.5, 0.75, 0.9, 0.99)(trip_distance)
FROM
nyc_taxi.trips_small_inferred
WHERE
speed_mph > 30
FORMAT JSON
1 row in set. Elapsed: 1.699 sec. Processed 329.04 million rows, 8.88 GB (193.72 million rows/s., 5.23 GB/s.)
Peak memory usage: 440.24 MiB.
-- Run query 2
SELECT
payment_type,
COUNT() AS trip_count,
formatReadableQuantity(SUM(trip_distance)) AS total_distance,
AVG(total_amount) AS total_amount_avg,
AVG(tip_amount) AS tip_amount_avg
FROM
nyc_taxi.trips_small_inferred
WHERE
pickup_datetime >= '2009-01-01' AND pickup_datetime < '2009-04-01'
GROUP BY
payment_type
ORDER BY
trip_count DESC;
4 rows in set. Elapsed: 1.419 sec. Processed 329.04 million rows, 5.72 GB (231.86 million rows/s., 4.03 GB/s.)
Peak memory usage: 546.75 MiB.
-- Run query 3
SELECT
avg(dateDiff('s', pickup_datetime, dropoff_datetime))
FROM nyc_taxi.trips_small_inferred
WHERE passenger_count = 1 or passenger_count = 2
FORMAT JSON
1 row in set. Elapsed: 1.414 sec. Processed 329.04 million rows, 8.88 GB (232.63 million rows/s., 6.28 GB/s.)
Peak memory usage: 451.53 MiB.
```
Let's summarize the results in a table for easy reading.
| Name | Elapsed | Rows processed | Peak memory |
| ------- | --------- | -------------- | ----------- |
| Query 1 | 1.699 sec | 329.04 million | 440.24 MiB |
| Query 2 | 1.419 sec | 329.04 million | 546.75 MiB |
| Query 3 | 1.414 sec | 329.04 million | 451.53 MiB |
Let's understand a bit better what the queries achieve.
Query 1 calculates the distance distribution of rides with an average speed of over 30 miles per hour.

Query 2 finds the number and average cost of rides per payment type for the first quarter of 2009.

Query 3 calculates the average duration of each trip in the dataset.

None of these queries perform very complex processing, except the first query, which calculates the trip time on the fly every time the query executes. However, each of these queries takes more than one second to execute, which, in the ClickHouse world, is a very long time. We can also note the memory usage of these queries: roughly 400 MiB for each query is quite a lot of memory. Also, each query appears to read the same number of rows (i.e., 329.04 million). Let's quickly confirm how many rows are in this table.
```sql
-- Count number of rows in table
SELECT count()
FROM nyc_taxi.trips_small_inferred
Query id: 733372c5-deaf-4719-94e3-261540933b23
   ┌───count()─┐
1. │ 329044175 │ -- 329.04 million
   └───────────┘
```
The table contains 329.04 million rows, therefore each query is doing a full scan of the table.
Explain statement {#explain-statement}
Now that we have some long-running queries, let's understand how they are executed. For this, ClickHouse supports the EXPLAIN statement command. It is a very useful tool that provides a detailed view of all the query execution stages without actually running the query. While it can be overwhelming to look at for a non-ClickHouse expert, it remains an essential tool for gaining insight into how your query is executed.

The documentation provides a detailed guide on what the EXPLAIN statement is and how to use it to analyze your query execution. Rather than repeating what is in this guide, let's focus on a few commands that will help us find bottlenecks in query execution performance.
Explain indexes = 1
Let's start with EXPLAIN indexes = 1 to inspect the query plan. The query plan is a tree showing how the query will be executed. There, you can see in which order the clauses from the query will be executed. The query plan returned by the EXPLAIN statement can be read from bottom to top.
Let's try using the first of our long-running queries.
```sql
EXPLAIN indexes = 1
WITH
dateDiff('s', pickup_datetime, dropoff_datetime) AS trip_time,
(trip_distance / trip_time) * 3600 AS speed_mph
SELECT quantiles(0.5, 0.75, 0.9, 0.99)(trip_distance)
FROM nyc_taxi.trips_small_inferred
WHERE speed_mph > 30
Query id: f35c412a-edda-4089-914b-fa1622d69868
   ┌─explain────────────────────────────────────────────────────┐
1. │ Expression ((Projection + Before ORDER BY))                │
2. │   Aggregating                                              │
3. │     Expression (Before GROUP BY)                           │
4. │       Filter (WHERE)                                       │
5. │         ReadFromMergeTree (nyc_taxi.trips_small_inferred)  │
   └────────────────────────────────────────────────────────────┘
```

The output is straightforward. The query begins by reading data from the
`nyc_taxi.trips_small_inferred` table. Then, the WHERE clause is applied to filter rows based on computed values. The filtered data is prepared for aggregation, and the quantiles are computed. Finally, the result is sorted and output.

Here, we can note that no primary keys are used, which makes sense since we didn't define any when we created the table. As a result, ClickHouse performs a full scan of the table for this query.
Explain Pipeline
EXPLAIN Pipeline shows the concrete execution strategy for the query. There, you can see how ClickHouse actually executed the generic query plan we looked at previously.
```sql
EXPLAIN PIPELINE
WITH
dateDiff('s', pickup_datetime, dropoff_datetime) AS trip_time,
(trip_distance / trip_time) * 3600 AS speed_mph
SELECT quantiles(0.5, 0.75, 0.9, 0.99)(trip_distance)
FROM nyc_taxi.trips_small_inferred
WHERE speed_mph > 30
Query id: c7e11e7b-d970-4e35-936c-ecfc24e3b879
┌─explain───────────────────────────────────────────────────────────────────────────────┐
│ (Expression)                                                                          │
│ ExpressionTransform × 59                                                              │
│   (Aggregating)                                                                       │
│   Resize 59 → 59                                                                      │
│     AggregatingTransform × 59                                                         │
│       StrictResize 59 → 59                                                            │
│         (Expression)                                                                  │
│         ExpressionTransform × 59                                                      │
│           (Filter)                                                                    │
│           FilterTransform × 59                                                        │
│             (ReadFromMergeTree)                                                       │
│             MergeTreeSelect(pool: PrefetchedReadPool, algorithm: Thread) × 59 0 → 1   │
└───────────────────────────────────────────────────────────────────────────────────────┘
```
Here, we can note the number of threads used to execute the query: 59 threads, which indicates a high degree of parallelization. This speeds up the query, which would take longer to execute on a smaller machine. The number of threads running in parallel can also explain the query's high memory usage.

Ideally, you would investigate all your slow queries the same way to identify unnecessarily complex query plans and understand the number of rows read and the resources consumed by each query.
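If memory usage is a concern, parallelism can be traded for a lower peak memory footprint via the `max_threads` setting. A minimal sketch, rerunning the first slow query with fewer threads (the exact effect depends on your hardware and data):

```sql
-- Sketch: limit parallelism to observe the speed/memory trade-off
SET max_threads = 8;

WITH
    dateDiff('s', pickup_datetime, dropoff_datetime) AS trip_time,
    (trip_distance / trip_time) * 3600 AS speed_mph
SELECT quantiles(0.5, 0.75, 0.9, 0.99)(trip_distance)
FROM nyc_taxi.trips_small_inferred
WHERE speed_mph > 30;
```

Fewer threads generally means fewer concurrent aggregation states in memory, at the cost of a longer runtime.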
Methodology {#methodology}
It can be difficult to identify problematic queries on a production deployment, as there are probably a large number of queries being executed at any given time on your ClickHouse deployment.
If you know which user, database, or tables are having issues, you can use the fields `user`, `tables`, or `databases` from `system.query_log` to narrow down the search.

Once you identify the queries you want to optimize, you can start working on them. One common mistake developers make at this stage is changing multiple things simultaneously and running ad-hoc experiments, usually ending up with mixed results but, more importantly, missing a good understanding of what made the query faster.

Query optimization requires structure. I'm not talking about advanced benchmarking, but having a simple process in place to understand how your changes affect query performance can go a long way.

Start by identifying your slow queries from query logs, then investigate potential improvements in isolation. When testing a query, make sure you disable the filesystem cache.
ClickHouse leverages caching to speed up query performance at different stages. This is good for query performance, but during troubleshooting, it can hide potential I/O bottlenecks or a poor table schema. For this reason, I suggest turning off the filesystem cache during testing. Make sure to have it enabled in your production setup.
Once you have identified potential optimizations, it is recommended that you implement them one by one to better track how they affect performance. Below is a diagram describing the general approach.
Finally, be cautious of outliers; it's pretty common that a query might run slowly, either because a user tried an ad-hoc expensive query or the system was under stress for another reason. You can group by the field normalized_query_hash to identify expensive queries that are being executed regularly. Those are probably the ones you want to investigate.
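As a sketch, a query along these lines can surface the normalized query shapes that run most often and cost the most (the time window and thresholds are arbitrary choices, adjust them to your workload):

```sql
-- Group queries by their normalized shape to spot recurring expensive ones
SELECT
    normalized_query_hash,
    count() AS executions,
    avg(query_duration_ms) AS avg_duration_ms,
    any(query) AS example_query
FROM system.query_log
WHERE type = 'QueryFinish'
  AND event_time >= now() - INTERVAL 1 DAY
GROUP BY normalized_query_hash
HAVING executions > 10
ORDER BY avg_duration_ms DESC
LIMIT 10;
```

Filtering on a minimum execution count weeds out one-off ad-hoc queries, leaving the regularly executed ones that are worth optimizing.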
Basic optimization {#basic-optimization}
Now that we have our framework to test, we can start optimizing.
The best place to start is to look at how the data is stored. As with any database, the less data we read, the faster the query executes.

Depending on how you ingested your data, you might have leveraged ClickHouse's capabilities to infer the table schema from the ingested data. While this is very practical to get started, if you want to optimize your query performance, you'll need to review the data schema to best fit your use case.
Nullable {#nullable}
As described in the best practices documentation, avoid nullable columns wherever possible. It is tempting to use them often, as they make the data ingestion mechanism more flexible, but they negatively affect performance, since an additional column has to be processed every time.

Running an SQL query that counts the rows with a NULL value can easily reveal which columns in your tables actually need to be Nullable.
```sql
-- Find columns containing NULL values
SELECT
countIf(vendor_id IS NULL) AS vendor_id_nulls,
countIf(pickup_datetime IS NULL) AS pickup_datetime_nulls,
countIf(dropoff_datetime IS NULL) AS dropoff_datetime_nulls,
countIf(passenger_count IS NULL) AS passenger_count_nulls,
countIf(trip_distance IS NULL) AS trip_distance_nulls,
countIf(fare_amount IS NULL) AS fare_amount_nulls,
countIf(mta_tax IS NULL) AS mta_tax_nulls,
countIf(tip_amount IS NULL) AS tip_amount_nulls,
countIf(tolls_amount IS NULL) AS tolls_amount_nulls,
countIf(total_amount IS NULL) AS total_amount_nulls,
countIf(payment_type IS NULL) AS payment_type_nulls,
countIf(pickup_location_id IS NULL) AS pickup_location_id_nulls,
countIf(dropoff_location_id IS NULL) AS dropoff_location_id_nulls
FROM trips_small_inferred
FORMAT Vertical
Query id: 4a70fc5b-2501-41c8-813c-45ce241d85ae
Row 1:
──────
vendor_id_nulls: 0
pickup_datetime_nulls: 0
dropoff_datetime_nulls: 0
passenger_count_nulls: 0
trip_distance_nulls: 0
fare_amount_nulls: 0
mta_tax_nulls: 137946731
tip_amount_nulls: 0
tolls_amount_nulls: 0
total_amount_nulls: 0
payment_type_nulls: 69305
pickup_location_id_nulls: 0
dropoff_location_id_nulls: 0
```
We have only two columns with null values: `mta_tax` and `payment_type`. The rest of the fields should not use a `Nullable` column.
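If you want to fix an existing inferred schema in place rather than recreate the table, a hedged sketch using `ALTER TABLE ... MODIFY COLUMN` (this rewrites data, so test it on a copy first; `trip_distance` is used here only as an example column that contains no NULLs):

```sql
-- Drop Nullable from a column that never actually contains NULLs
ALTER TABLE trips_small_inferred
    MODIFY COLUMN trip_distance Float32;
```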
Low cardinality {#low-cardinality}
An easy optimization to apply to String columns is to make the best use of the LowCardinality data type. As described in the low cardinality documentation, ClickHouse applies dictionary coding to LowCardinality columns, which significantly increases query performance.

An easy rule of thumb for determining which columns are good candidates for LowCardinality: any column with fewer than 10,000 unique values is a perfect candidate.
You can use the following SQL query to find columns with a low number of unique values.
```sql
-- Identify low cardinality columns
SELECT
uniq(ratecode_id),
uniq(pickup_location_id),
uniq(dropoff_location_id),
uniq(vendor_id)
FROM trips_small_inferred
FORMAT Vertical
Query id: d502c6a1-c9bc-4415-9d86-5de74dd6d932
Row 1:
──────
uniq(ratecode_id): 6
uniq(pickup_location_id): 260
uniq(dropoff_location_id): 260
uniq(vendor_id): 3
```
With such low cardinality, those four columns, `ratecode_id`, `pickup_location_id`, `dropoff_location_id`, and `vendor_id`, are good candidates for the LowCardinality field type.
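Existing String columns can also be converted in place; a minimal sketch, assuming `vendor_id` is currently a plain `String` (as with any mutation, test on a copy first):

```sql
-- Convert an existing String column to LowCardinality(String)
ALTER TABLE trips_small_inferred
    MODIFY COLUMN vendor_id LowCardinality(String);
```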
Optimize data type {#optimize-data-type}
ClickHouse supports a large number of data types. Make sure to pick the smallest possible data type that fits your use case to optimize performance and reduce your data storage space on disk.
For numbers, you can check the min/max values in your dataset to verify whether the current precision matches the reality of your data.
```sql
-- Find min/max values for the payment_type field
SELECT
    min(payment_type), max(payment_type),
    min(passenger_count), max(passenger_count)
FROM trips_small_inferred

Query id: 4306a8e1-2a9c-4b06-97b4-4d902d2233eb

   ┌─min(payment_type)─┬─max(payment_type)─┐
1. │                 1 │                 4 │
   └───────────────────┴───────────────────┘
```
For dates, you should pick a precision that matches your dataset and is best suited to answering the queries you're planning to run.
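Mirroring the numeric check above, a quick sketch for inspecting the time range before choosing between `Date`, `DateTime`, and `DateTime64` (a `DateTime` value takes 4 bytes, a `DateTime64` takes 8):

```sql
-- Check the time range to choose the smallest suitable date type
SELECT
    min(pickup_datetime),
    max(pickup_datetime)
FROM trips_small_inferred;
```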
Apply the optimizations {#apply-the-optimizations}
Let's create a new table to use the optimized schema and re-ingest the data.
```sql
-- Create table with optimized data
CREATE TABLE trips_small_no_pk
(
    `vendor_id` LowCardinality(String),
    `pickup_datetime` DateTime,
    `dropoff_datetime` DateTime,
    `passenger_count` UInt8,
    `trip_distance` Float32,
    `ratecode_id` LowCardinality(String),
    `pickup_location_id` LowCardinality(String),
    `dropoff_location_id` LowCardinality(String),
    `payment_type` Nullable(UInt8),
    `fare_amount` Decimal32(2),
    `extra` Decimal32(2),
    `mta_tax` Nullable(Decimal32(2)),
    `tip_amount` Decimal32(2),
    `tolls_amount` Decimal32(2),
    `total_amount` Decimal32(2)
)
ORDER BY tuple();

-- Insert the data
INSERT INTO trips_small_no_pk SELECT * FROM trips_small_inferred
```
We run the queries again using the new table to check for improvement.

| Name    | Run 1 - Elapsed | Run 2 - Elapsed | Rows processed | Peak memory |
| ------- | --------------- | --------------- | -------------- | ----------- |
| Query 1 | 1.699 sec | 1.353 sec | 329.04 million | 337.12 MiB |
| Query 2 | 1.419 sec | 1.171 sec | 329.04 million | 531.09 MiB |
| Query 3 | 1.414 sec | 1.188 sec | 329.04 million | 265.05 MiB |
We notice some improvements in both query time and memory usage. Thanks to the optimized data schema, we reduce the total volume of data representing our dataset, leading to improved memory consumption and reduced processing time.

Let's check the size of the tables to see the difference.
```sql
SELECT
    `table`,
    formatReadableSize(sum(data_compressed_bytes) AS size) AS compressed,
    formatReadableSize(sum(data_uncompressed_bytes) AS usize) AS uncompressed,
    sum(rows) AS rows
FROM system.parts
WHERE (active = 1) AND ((`table` = 'trips_small_no_pk') OR (`table` = 'trips_small_inferred'))
GROUP BY
    database,
    `table`
ORDER BY size DESC

Query id: 72b5eb1c-ff33-4fdb-9d29-dd076ac6f532

   ┌─table────────────────┬─compressed─┬─uncompressed─┬──────rows─┐
1. │ trips_small_inferred │ 7.38 GiB   │ 37.41 GiB    │ 329044175 │
2. │ trips_small_no_pk    │ 4.89 GiB   │ 15.31 GiB    │ 329044175 │
   └──────────────────────┴────────────┴──────────────┴───────────┘
```
The new table is considerably smaller than the previous one. We see a reduction of about 34% in disk space for the table (7.38 GiB vs 4.89 GiB).
The importance of primary keys {#the-importance-of-primary-keys}
Primary keys in ClickHouse work differently than in most traditional database systems. In those systems, primary keys enforce uniqueness and data integrity: any attempt to insert duplicate primary key values is rejected, and a B-tree or hash-based index is usually created for fast lookup.

In ClickHouse, the primary key's objective is different; it does not enforce uniqueness or help with data integrity. Instead, it is designed to optimize query performance. The primary key defines the order in which the data is stored on disk and is implemented as a sparse index that stores pointers to the first row of each granule.
Granules in ClickHouse are the smallest units of data read during query execution. They contain up to a fixed number of rows, determined by `index_granularity`, with a default value of 8192 rows. Granules are stored contiguously and sorted by the primary key.
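You can get a feel for the granule layout of a table through `system.parts`, where the number of marks roughly corresponds to the number of granules per part; a sketch:

```sql
-- rows / marks is approximately index_granularity (8192 by default)
SELECT
    name AS part,
    rows,
    marks
FROM system.parts
WHERE table = 'trips_small_inferred' AND active;
```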
Selecting a good set of primary keys is important for performance, and it's actually common to store the same data in different tables, using different sets of primary keys to speed up a specific set of queries.

Other options supported by ClickHouse, such as projections or materialized views, allow you to use a different set of primary keys on the same data. The second part of this blog series will cover this in more detail.
Choose primary keys {#choose-primary-keys}
Choosing the correct set of primary keys is a complex topic, and it might require trade-offs and experiments to find the best combination.

For now, we're going to follow these simple practices:

- Use fields that are used to filter in most queries
- Choose columns with lower cardinality first
- Consider a time-based component in your primary key, as filtering by time on a timestamp dataset is pretty common
In our case, we will experiment with the following primary keys: `passenger_count`, `pickup_datetime`, and `dropoff_datetime`.

The cardinality of `passenger_count` is small (24 unique values), and it is used in our slow queries. We also add the timestamp fields (`pickup_datetime` and `dropoff_datetime`), as they are often filtered on.
Create a new table with the primary keys and re-ingest the data.
```sql
CREATE TABLE trips_small_pk
(
    `vendor_id` UInt8,
    `pickup_datetime` DateTime,
    `dropoff_datetime` DateTime,
    `passenger_count` UInt8,
    `trip_distance` Float32,
    `ratecode_id` LowCardinality(String),
    `pickup_location_id` UInt16,
    `dropoff_location_id` UInt16,
    `payment_type` Nullable(UInt8),
    `fare_amount` Decimal32(2),
    `extra` Decimal32(2),
    `mta_tax` Nullable(Decimal32(2)),
    `tip_amount` Decimal32(2),
    `tolls_amount` Decimal32(2),
    `total_amount` Decimal32(2)
)
PRIMARY KEY (passenger_count, pickup_datetime, dropoff_datetime);

-- Insert the data
INSERT INTO trips_small_pk SELECT * FROM trips_small_inferred
```
We then rerun our queries and compile the results from the three experiments to compare the improvements in elapsed time, rows processed, and memory consumption.

Query 1

|                | Run 1          | Run 2          | Run 3          |
| -------------- | -------------- | -------------- | -------------- |
| Elapsed        | 1.699 sec      | 1.353 sec      | 0.765 sec      |
| Rows processed | 329.04 million | 329.04 million | 329.04 million |
| Peak memory    | 440.24 MiB     | 337.12 MiB     | 444.19 MiB     |

Query 2

|                | Run 1          | Run 2          | Run 3          |
| -------------- | -------------- | -------------- | -------------- |
| Elapsed        | 1.419 sec      | 1.171 sec      | 0.248 sec      |
| Rows processed | 329.04 million | 329.04 million | 41.46 million  |
| Peak memory    | 546.75 MiB     | 531.09 MiB     | 173.50 MiB     |

Query 3

|                | Run 1          | Run 2          | Run 3          |
| -------------- | -------------- | -------------- | -------------- |
| Elapsed        | 1.414 sec      | 1.188 sec      | 0.431 sec      |
| Rows processed | 329.04 million | 329.04 million | 276.99 million |
| Peak memory    | 451.53 MiB     | 265.05 MiB     | 197.38 MiB     |
We can see significant improvement across the board in execution time and memory usage.

Query 2 benefits the most from the primary key. Let's have a look at how the generated query plan differs from before.
```sql
EXPLAIN indexes = 1
SELECT
payment_type,
COUNT() AS trip_count,
formatReadableQuantity(SUM(trip_distance)) AS total_distance,
AVG(total_amount) AS total_amount_avg,
AVG(tip_amount) AS tip_amount_avg
FROM nyc_taxi.trips_small_pk
WHERE (pickup_datetime >= '2009-01-01') AND (pickup_datetime < '2009-04-01')
GROUP BY payment_type
ORDER BY trip_count DESC
Query id: 30116a77-ba86-4e9f-a9a2-a01670ad2e15
┌─explain────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ Expression ((Projection + Before ORDER BY [lifted up part]))                                                       │
│   Sorting (Sorting for ORDER BY)                                                                                   │
│     Expression (Before ORDER BY)                                                                                   │
│       Aggregating                                                                                                  │
│         Expression (Before GROUP BY)                                                                               │
│           Expression                                                                                               │
│             ReadFromMergeTree (nyc_taxi.trips_small_pk)                                                            │
│             Indexes:                                                                                               │
│               PrimaryKey                                                                                           │
│                 Keys:                                                                                              │
│                   pickup_datetime                                                                                  │
│                 Condition: and((pickup_datetime in (-Inf, 1238543999]), (pickup_datetime in [1230768000, +Inf)))   │
│                 Parts: 9/9                                                                                         │
│                 Granules: 5061/40167                                                                               │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
```
Thanks to the primary key, only a subset of the table granules has been selected. This alone greatly improves the query performance since ClickHouse has to process significantly less data.
Next steps {#next-steps}
Hopefully this guide gives you a good understanding of how to investigate slow queries with ClickHouse and how to make them faster. To explore this topic further, you can read more about the query analyzer and profiling to better understand how exactly ClickHouse executes your queries.
As you get more familiar with ClickHouse specifics, I would recommend reading about partitioning keys and data skipping indexes
to learn about more advanced techniques you can use to accelerate your queries.

slug: /optimize/avoidoptimizefinal
sidebar_label: 'Avoid optimize final'
title: 'Avoid Optimize Final'
description: 'Using the OPTIMIZE TABLE ... FINAL query will initiate an unscheduled merge of data parts.'
doc_type: 'guide'
keywords: ['avoid optimize final', 'optimize table final', 'best practices', 'merge data parts', 'performance optimization']
import Content from '@site/docs/best-practices/_snippets/_avoid_optimize_final.md';

slug: /optimize/avoid-mutations
sidebar_label: 'Avoid mutations'
title: 'Avoid Mutations'
description: 'Mutations refers to ALTER queries that manipulate table data'
doc_type: 'guide'
keywords: ['avoid mutations', 'ALTER queries', 'table data manipulation', 'best practices', 'performance optimization']
import Content from '@site/docs/best-practices/_snippets/_avoid_mutations.md';

slug: /optimize/prewhere
sidebar_label: 'PREWHERE optimization'
sidebar_position: 21
description: 'PREWHERE reduces I/O by avoiding reading unnecessary column data.'
title: 'How does the PREWHERE optimization work?'
doc_type: 'guide'
keywords: ['prewhere', 'query optimization', 'performance', 'filtering', 'best practices']
import visual01 from '@site/static/images/guides/best-practices/prewhere_01.gif';
import visual02 from '@site/static/images/guides/best-practices/prewhere_02.gif';
import visual03 from '@site/static/images/guides/best-practices/prewhere_03.gif';
import visual04 from '@site/static/images/guides/best-practices/prewhere_04.gif';
import visual05 from '@site/static/images/guides/best-practices/prewhere_05.gif';
import Image from '@theme/IdealImage';
How does the PREWHERE optimization work?
The PREWHERE clause is a query execution optimization in ClickHouse. It reduces I/O and improves query speed by filtering out irrelevant data before non-filter columns are read from disk, avoiding unnecessary data reads.

This guide explains how PREWHERE works, how to measure its impact, and how to tune it for best performance.
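Syntactically, PREWHERE takes its place just before WHERE in a query. A minimal sketch using the table from this guide (the predicate split shown here is illustrative):

```sql
-- Manually place the most selective predicate in PREWHERE
SELECT street
FROM uk.uk_price_paid_simple
PREWHERE town = 'LONDON'
WHERE date > '2024-12-31' AND price < 10_000;
```

As discussed below, you rarely need to write this by hand, since ClickHouse moves WHERE conditions to PREWHERE automatically.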
Query processing without PREWHERE optimization {#query-processing-without-prewhere-optimization}
We'll start by illustrating how a query on the `uk_price_paid_simple` table is processed without using PREWHERE:
① The query includes a filter on the `town` column, which is part of the table's primary key, and therefore also part of the primary index.

② To accelerate the query, ClickHouse loads the table's primary index into memory.

③ It scans the index entries to identify which granules from the `town` column might contain rows matching the predicate.

④ These potentially relevant granules are loaded into memory, along with positionally aligned granules from any other columns needed for the query.

⑤ The remaining filters are then applied during query execution.
As you can see, without PREWHERE, all potentially relevant columns are loaded before filtering, even if only a few rows actually match.
How PREWHERE improves query efficiency {#how-prewhere-improves-query-efficiency}
The following animations show how the query from above is processed with a PREWHERE clause applied to all query predicates.
The first three processing steps are the same as before:
① The query includes a filter on the `town` column, which is part of the table's primary key, and therefore also part of the primary index.

② As in the run without the PREWHERE clause, ClickHouse loads the primary index into memory to accelerate the query,

③ then scans the index entries to identify which granules from the `town` column might contain rows matching the predicate.
Now, thanks to the PREWHERE clause, the next step differs: Instead of reading all relevant columns up front, ClickHouse filters data column by column, only loading what's truly needed. This drastically reduces I/O, especially for wide tables.

With each step, it only loads granules that contain at least one row that survived (i.e., matched) the previous filter. As a result, the number of granules to load and evaluate for each filter decreases monotonically:
Step 1: Filtering by town

ClickHouse begins PREWHERE processing by ① reading the selected granules from the `town` column and checking which ones actually contain rows matching `London`.

In our example, all selected granules do match, so ② the corresponding positionally aligned granules for the next filter column, `date`, are then selected for processing:
Step 2: Filtering by date

Next, ClickHouse ① reads the selected `date` column granules to evaluate the filter `date > '2024-12-31'`.

In this case, two out of three granules contain matching rows, so ② only their positionally aligned granules from the next filter column, `price`, are selected for further processing:
Step 3: Filtering by price

Finally, ClickHouse ① reads the two selected granules from the `price` column to evaluate the last filter `price > 10_000`.

Only one of the two granules contains matching rows, so ② just its positionally aligned granule from the `SELECT` column, `street`, needs to be loaded for further processing:

By the final step, only the minimal set of column granules, those containing matching rows, is loaded. This leads to lower memory usage, less disk I/O, and faster query execution.
:::note PREWHERE reduces data read, not rows processed
Note that ClickHouse processes the same number of rows in both the PREWHERE and non-PREWHERE versions of the query. However, with PREWHERE optimizations applied, not all column values need to be loaded for every processed row.
:::
PREWHERE optimization is automatically applied {#prewhere-optimization-is-automatically-applied}
The PREWHERE clause can be added manually, as shown in the example above. However, you don't need to write PREWHERE manually. When the setting `optimize_move_to_prewhere` is enabled (true by default), ClickHouse automatically moves filter conditions from WHERE to PREWHERE, prioritizing those that will reduce read volume the most.

The idea is that smaller columns are faster to scan, and by the time larger columns are processed, most granules have already been filtered out. Since all columns have the same number of rows, a column's size is primarily determined by its data type; for example, a `UInt8` column is generally much smaller than a `String` column.
ClickHouse follows this strategy by default as of version
23.2
, sorting PREWHERE filter columns for multi-step processing in ascending order of uncompressed size.
Starting with version
23.11
, optional column statistics can further improve this by choosing the filter processing order based on actual data selectivity, not just column size.
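The default size-based ordering can be sketched in a few lines. This is a toy model, not ClickHouse's implementation, and the column names and byte counts below are illustrative rather than taken from a real table:

```python
# Toy model of PREWHERE predicate ordering: sort filter columns by
# uncompressed size so the cheapest columns are scanned first.
def prewhere_order(predicates):
    """predicates: list of (column_name, uncompressed_bytes) tuples."""
    return [col for col, size in sorted(predicates, key=lambda p: p[1])]

# Illustrative sizes only -- smaller columns get filtered first.
order = prewhere_order([("town", 9_000_000), ("date", 4_000_000), ("price", 8_000_000)])
print(order)  # ['date', 'price', 'town']
```

With column statistics enabled, actual selectivity would override this purely size-based ordering.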
How to measure PREWHERE impact {#how-to-measure-prewhere-impact}
To validate that PREWHERE is helping your queries, you can compare query performance with and without the
optimize_move_to_prewhere setting
enabled.
We begin by running the query with the
optimize_move_to_prewhere
setting disabled:
sql
SELECT
street
FROM
uk.uk_price_paid_simple
WHERE
town = 'LONDON' AND date > '2024-12-31' AND price < 10_000
SETTINGS optimize_move_to_prewhere = false;
```txt
   ┌─street──────┐
1. │ MOYSER ROAD │
2. │ AVENUE ROAD │
3. │ AVENUE ROAD │
   └─────────────┘
3 rows in set. Elapsed: 0.056 sec. Processed 2.31 million rows, 23.36 MB (41.09 million rows/s., 415.43 MB/s.)
Peak memory usage: 132.10 MiB.
```
ClickHouse read
23.36 MB
of column data while processing 2.31 million rows for the query.
Next, we run the query with the
optimize_move_to_prewhere
setting enabled. (Explicitly setting it is optional, as it is enabled by default):
sql
SELECT
street
FROM
uk.uk_price_paid_simple
WHERE
town = 'LONDON' AND date > '2024-12-31' AND price < 10_000
SETTINGS optimize_move_to_prewhere = true;
```txt
   ┌─street──────┐
1. │ MOYSER ROAD │
2. │ AVENUE ROAD │
3. │ AVENUE ROAD │
   └─────────────┘
3 rows in set. Elapsed: 0.017 sec. Processed 2.31 million rows, 6.74 MB (135.29 million rows/s., 394.44 MB/s.)
Peak memory usage: 132.11 MiB.
```
The same number of rows was processed (2.31 million), but thanks to PREWHERE, ClickHouse read over three times less column data (just 6.74 MB instead of 23.36 MB), which cut the total runtime by a factor of 3.
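The reduction factors quoted above follow directly from the numbers reported by the two runs:

```python
# Figures reported by the two runs above.
data_without = 23.36  # MB read without PREWHERE
data_with = 6.74      # MB read with PREWHERE
time_without = 0.056  # seconds
time_with = 0.017     # seconds

print(f"data read reduced {data_without / data_with:.1f}x")  # ~3.5x
print(f"runtime reduced {time_without / time_with:.1f}x")    # ~3.3x
```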
For deeper insight into how ClickHouse applies PREWHERE behind the scenes, use EXPLAIN and trace logs.
We inspect the query's logical plan using the
EXPLAIN
clause:
sql
EXPLAIN PLAN actions = 1
SELECT
street
FROM
uk.uk_price_paid_simple
WHERE
town = 'LONDON' and date > '2024-12-31' and price < 10_000;
txt
...
Prewhere info
Prewhere filter column:
and(greater(__table1.date, '2024-12-31'_String),
less(__table1.price, 10000_UInt16),
equals(__table1.town, 'LONDON'_String))
...
We omit most of the plan output here, as it's quite verbose. In essence, it shows that all three column predicates were automatically moved to PREWHERE.
When reproducing this yourself, you'll also see in the query plan that the order of these predicates is based on the columns' data type sizes. Since we haven't enabled column statistics, ClickHouse uses size as the fallback for determining the PREWHERE processing order.
If you want to go even further under the hood, you can observe each individual PREWHERE processing step by instructing ClickHouse to return all test-level log entries during query execution:
sql
SELECT
street
FROM
uk.uk_price_paid_simple
WHERE
town = 'LONDON' AND date > '2024-12-31' AND price < 10_000
SETTINGS send_logs_level = 'test';
txt
...
<Trace> ... Condition greater(date, '2024-12-31'_String) moved to PREWHERE
<Trace> ... Condition less(price, 10000_UInt16) moved to PREWHERE
<Trace> ... Condition equals(town, 'LONDON'_String) moved to PREWHERE
...
<Test> ... Executing prewhere actions on block: greater(__table1.date, '2024-12-31'_String)
<Test> ... Executing prewhere actions on block: less(__table1.price, 10000_UInt16)
...
Key takeaways {#key-takeaways}
PREWHERE avoids reading column data that will later be filtered out, saving I/O and memory.
It works automatically when
optimize_move_to_prewhere
is enabled (default).
Filtering order matters: small and selective columns should go first.
Use
EXPLAIN
and logs to verify PREWHERE is applied and understand its effect.
PREWHERE is most impactful on wide tables and large scans with selective filters.
slug: /optimize/query-parallelism
sidebar_label: 'Query parallelism'
sidebar_position: 20
description: 'ClickHouse parallelizes query execution using processing lanes and the max_threads setting.'
title: 'How ClickHouse executes a query in parallel'
doc_type: 'guide'
keywords: ['parallel processing', 'query optimization', 'performance', 'threading', 'best practices']
import visual01 from '@site/static/images/guides/best-practices/query-parallelism_01.gif';
import visual02 from '@site/static/images/guides/best-practices/query-parallelism_02.gif';
import visual03 from '@site/static/images/guides/best-practices/query-parallelism_03.gif';
import visual04 from '@site/static/images/guides/best-practices/query-parallelism_04.gif';
import visual05 from '@site/static/images/guides/best-practices/query-parallelism_05.png';
import Image from '@theme/IdealImage';
How ClickHouse executes a query in parallel
ClickHouse is
built for speed
. It executes queries in a highly parallel fashion, using all available CPU cores, distributing data across processing lanes, and often pushing hardware close to its limits.
This guide walks through how query parallelism works in ClickHouse and how you can tune or monitor it to improve performance on large workloads.
We use an aggregation query on the
uk_price_paid_simple
dataset to illustrate key concepts.
Step-by-step: How ClickHouse parallelizes an aggregation query {#step-by-step-how-clickHouse-parallelizes-an-aggregation-query}
When ClickHouse ① runs an aggregation query with a filter on the table's primary key, it ② loads the primary index into memory to ③ identify which granules need to be processed, and which can be safely skipped:
Distributing work across processing lanes {#distributing-work-across-processing-lanes}
The selected data is then
dynamically
distributed across
n
parallel
processing lanes
, which stream and process the data
block
by block into the final result:
The number of
n
parallel processing lanes is controlled by the
max_threads
setting, which by default matches the number of CPU cores available to ClickHouse on the server. In the example above, we assume
4
cores.
On a machine with
8
cores, query processing throughput would roughly double (but memory usage would also increase accordingly), as more lanes process data in parallel:
Efficient lane distribution is key to maximizing CPU utilization and reducing total query time.
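As a rough illustration (this is a toy model, not ClickHouse's actual scheduler), dynamic distribution can be modeled as lanes pulling blocks from a shared queue whenever they become free, so faster lanes naturally end up handling more blocks:

```python
from collections import deque

def distribute_blocks(blocks, n_lanes):
    """Toy model of dynamic lane distribution: each lane pulls the next
    block from a shared queue as soon as it is free, so no lane is
    assigned a fixed range up front. Lane speeds here are simulated."""
    queue = deque(blocks)
    # Alternating per-block costs: even lanes are "fast", odd lanes "slow".
    costs = [1 + (i % 2) for i in range(n_lanes)]
    busy_until = [0] * n_lanes
    assigned = [[] for _ in range(n_lanes)]
    while queue:
        lane = min(range(n_lanes), key=lambda i: busy_until[i])  # next free lane
        assigned[lane].append(queue.popleft())
        busy_until[lane] += costs[lane]
    return assigned

lanes = distribute_blocks(list(range(12)), 4)
print([len(l) for l in lanes])  # [4, 2, 4, 2] -- fast lanes take more blocks
```

The faster lanes process twice as many blocks, which is the load-balancing effect the Resize operators provide in the real pipeline.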
Processing queries on sharded tables {#processing-queries-on-sharded-tables}
When table data is distributed across multiple servers as
shards
, each server processes its shard in parallel. Within each server, the local data is handled using parallel processing lanes, just as described above:
The server that initially receives the query collects all sub-results from the shards and combines them into the final global result.
Distributing query load across shards allows horizontal scaling of parallelism, especially for high-throughput environments.
:::note ClickHouse Cloud uses parallel replicas instead of shards
In ClickHouse Cloud, this same parallelism is achieved through
parallel replicas
, which function similarly to shards in shared-nothing clusters. Each ClickHouse Cloud replica (a stateless compute node) processes a portion of the data in parallel and contributes to the final result, just like an independent shard would.
:::
Monitoring query parallelism {#monitoring-query-parallelism}
Use these tools to verify that your query fully utilizes available CPU resources and to diagnose when it doesn't.
We're running this on a test server with 59 CPU cores, which allows ClickHouse to fully showcase its query parallelism.
To observe how the example query is executed, we can instruct the ClickHouse server to return all trace-level log entries during the aggregation query. For this demonstration, we removed the query's predicate; otherwise, only 3 granules would be processed, which isn't enough data for ClickHouse to make use of more than a few parallel processing lanes:
sql runnable=false
SELECT
max(price)
FROM
uk.uk_price_paid_simple
SETTINGS send_logs_level='trace';
txt
① <Debug> ...: 3609 marks to read from 3 ranges
② <Trace> ...: Spreading mark ranges among streams
② <Debug> ...: Reading approx. 29564928 rows with 59 streams
We can see that
① ClickHouse needs to read 3,609 granules (indicated as marks in the trace logs) across 3 data ranges.
② With 59 CPU cores, it distributes this work across 59 parallel processing streams, one per lane.
Alternatively, we can use the
EXPLAIN
clause to inspect the
physical operator plan
(also known as the "query pipeline") for the aggregation query:
sql runnable=false
EXPLAIN PIPELINE
SELECT
max(price)
FROM
uk.uk_price_paid_simple;
txt
    ┌─explain─────────────────────────────────────────────────────────────────────────┐
 1. │ (Expression)                                                                    │
 2. │ ExpressionTransform × 59                                                        │
 3. │   (Aggregating)                                                                 │
 4. │   Resize 59 → 59                                                                │
 5. │     AggregatingTransform × 59                                                   │
 6. │     StrictResize 59 → 59                                                        │
 7. │       (Expression)                                                              │
 8. │       ExpressionTransform × 59                                                  │
 9. │         (ReadFromMergeTree)                                                     │
10. │         MergeTreeSelect(pool: PrefetchedReadPool, algorithm: Thread) × 59 0 → 1 │
    └─────────────────────────────────────────────────────────────────────────────────┘
Note: Read the operator plan above from bottom to top. Each line represents a stage in the physical execution plan, starting with reading data from storage at the bottom and ending with the final processing steps at the top. Operators marked with
× 59
are executed concurrently on non-overlapping data regions across 59 parallel processing lanes. This reflects the value of
max_threads
and illustrates how each stage of the query is parallelized across CPU cores.
ClickHouse's
embedded web UI
(available at the
/play
endpoint) can render the physical plan from above as a graphical visualization. In this example, we set
max_threads
to
4
to keep the visualization compact, showing just 4 parallel processing lanes:
Note: Read the visualization from left to right. Each row represents a parallel processing lane that streams data block by block, applying transformations such as filtering, aggregation, and final processing stages. In this example, you can see four parallel lanes corresponding to the
max_threads = 4
setting.
Load balancing across processing lanes {#load-balancing-across-processing-lanes}
Note that the
Resize
operators in the physical plan above
repartition and redistribute
data block streams across processing lanes to keep them evenly utilized. This rebalancing is especially important when data ranges vary in how many rows match the query predicates; otherwise, some lanes may become overloaded while others sit idle. By redistributing the work, faster lanes effectively help out slower ones, optimizing overall query runtime.
Why max_threads isn't always respected {#why-max-threads-isnt-always-respected}
As mentioned above, the number of
n
parallel processing lanes is controlled by the
max_threads
setting, which by default matches the number of CPU cores available to ClickHouse on the server:
sql runnable=false
SELECT getSetting('max_threads');
txt
   ┌─getSetting('max_threads')─┐
1. │                        59 │
   └───────────────────────────┘
However, the
max_threads
value may be ignored depending on the amount of data selected for processing:
sql runnable=false
EXPLAIN PIPELINE
SELECT
max(price)
FROM
uk.uk_price_paid_simple
WHERE town = 'LONDON';
txt
...
(ReadFromMergeTree)
MergeTreeSelect(pool: PrefetchedReadPool, algorithm: Thread) × 30
As shown in the operator plan extract above, even though
max_threads
is set to
59
, ClickHouse uses only
30
concurrent streams to scan the data.
Now let's run the query:
sql runnable=false
SELECT
max(price)
FROM
uk.uk_price_paid_simple
WHERE town = 'LONDON';
```txt
   ┌─max(price)─┐
1. │  594300000 │ -- 594.30 million
   └────────────┘
1 row in set. Elapsed: 0.013 sec. Processed 2.31 million rows, 13.66 MB (173.12 million rows/s., 1.02 GB/s.)
Peak memory usage: 27.24 MiB.
```
As shown in the output above, the query processed 2.31 million rows and read 13.66 MB of data. This is because, during the index analysis phase, ClickHouse selected
282 granules
for processing, each containing 8,192 rows, totaling approximately 2.31 million rows:
sql runnable=false
EXPLAIN indexes = 1
SELECT
max(price)
FROM
uk.uk_price_paid_simple
WHERE town = 'LONDON';
txt
    ┌─explain───────────────────────────────────────────────┐
 1. │ Expression ((Project names + Projection))             │
 2. │   Aggregating                                         │
 3. │     Expression (Before GROUP BY)                      │
 4. │       Expression                                      │
 5. │         ReadFromMergeTree (uk.uk_price_paid_simple)   │
 6. │         Indexes:                                      │
 7. │           PrimaryKey                                  │
 8. │             Keys:                                     │
 9. │               town                                    │
10. │             Condition: (town in ['LONDON', 'LONDON']) │
11. │             Parts: 3/3                                │
12. │             Granules: 282/3609                        │
    └───────────────────────────────────────────────────────┘
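The row count follows directly from the granule math:

```python
# The index analysis above selected 282 of 3,609 granules.
granules_selected = 282
rows_per_granule = 8192  # default index_granularity

rows = granules_selected * rows_per_granule
print(rows)  # 2310144, i.e. the ~2.31 million rows reported by the query
```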
Regardless of the configured
max_threads
value, ClickHouse only allocates additional parallel processing lanes when there's enough data to justify them. The "max" in
max_threads
refers to an upper limit, not a guaranteed number of threads used.
What "enough data" means is primarily determined by two settings, which define the minimum number of rows (163,840 by default) and the minimum number of bytes (2,097,152 by default) that each processing lane should handle:
For shared-nothing clusters:
*
merge_tree_min_rows_for_concurrent_read
*
merge_tree_min_bytes_for_concurrent_read
For clusters with shared storage (e.g. ClickHouse Cloud):
*
merge_tree_min_rows_for_concurrent_read_for_remote_filesystem
*
merge_tree_min_bytes_for_concurrent_read_for_remote_filesystem
Additionally, there's a hard lower limit for read task size, controlled by:
*
merge_tree_min_read_task_size
*
merge_tree_min_bytes_per_task_for_remote_reading
:::warning Don't modify these settings
We don't recommend modifying these settings in production. They're shown here solely to illustrate why
max_threads
doesn't always determine the actual level of parallelism.
:::
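As a simplified mental model (the real scheduler also factors in read task sizing and prefetching, so it won't reproduce ClickHouse's exact stream counts), these thresholds cap concurrency roughly like this:

```python
def effective_lanes(max_threads, total_rows, total_bytes,
                    min_rows=163_840, min_bytes=2_097_152):
    """Simplified model: spawn only as many lanes as can each be fed at
    least min_rows rows and min_bytes bytes. This is an approximation,
    not ClickHouse's actual scheduling logic."""
    lanes_by_rows = total_rows // min_rows
    lanes_by_bytes = total_bytes // min_bytes
    return max(1, min(max_threads, lanes_by_rows, lanes_by_bytes))

# Full scan: plenty of data, so all 59 lanes are justified.
print(effective_lanes(59, 29_564_928, 500_000_000))  # 59
# Small filtered scan: too little data to justify more than one lane.
print(effective_lanes(59, 100_000, 500_000))         # 1
```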
For demonstration purposes, let's inspect the physical plan with these settings overridden to force maximum concurrency:
sql runnable=false
EXPLAIN PIPELINE
SELECT
max(price)
FROM
uk.uk_price_paid_simple
WHERE town = 'LONDON'
SETTINGS
max_threads = 59,
merge_tree_min_read_task_size = 0,
merge_tree_min_rows_for_concurrent_read_for_remote_filesystem = 0,
merge_tree_min_bytes_for_concurrent_read_for_remote_filesystem = 0;
txt
...
(ReadFromMergeTree)
MergeTreeSelect(pool: PrefetchedReadPool, algorithm: Thread) × 59
Now ClickHouse uses 59 concurrent streams to scan the data, fully respecting the configured
max_threads
.
This demonstrates that for queries on small datasets, ClickHouse will intentionally limit concurrency. Use setting overrides only for testing, not in production, as they can lead to inefficient execution or resource contention.
Key takeaways {#key-takeaways}
ClickHouse parallelizes queries using processing lanes tied to
max_threads
.
The actual number of lanes depends on the size of data selected for processing.
Use
EXPLAIN PIPELINE
and trace logs to analyze lane usage.
Where to find more information {#where-to-find-more-information}
If you'd like to dive deeper into how ClickHouse executes queries in parallel and how it achieves high performance at scale, explore the following resources:
Query Processing Layer – VLDB 2024 Paper (Web Edition)
- A detailed breakdown of ClickHouse's internal execution model, including scheduling, pipelining, and operator design.
Partial aggregation states explained
- A technical deep dive into how partial aggregation states enable efficient parallel execution across processing lanes.
A video tutorial walking in detail through all ClickHouse query processing steps:
slug: /optimize/avoid-nullable-columns
sidebar_label: 'Avoid nullable Columns'
title: 'Avoid nullable Columns'
description: 'Why Nullable Columns should be avoided in ClickHouse'
doc_type: 'guide'
keywords: ['avoid nullable columns', 'nullable columns', 'data types', 'best practices', 'performance optimization']
import Content from '@site/docs/best-practices/_snippets/_avoid_nullable_columns.md';
slug: /optimize/partitioning-key
sidebar_label: 'Partitioning key'
title: 'Choose a Low Cardinality Partitioning Key'
description: 'Use a low cardinality partitioning key or avoid using any partitioning key for your table.'
doc_type: 'guide'
keywords: ['partitioning', 'partition key', 'data organization', 'best practices', 'performance']
import Content from '@site/docs/best-practices/partitioning_keys.mdx';
slug: /operations/overview
sidebar_label: 'Performance and optimizations overview'
description: 'Overview page of Performance and Optimizations'
title: 'Performance and Optimizations'
keywords: ['performance optimization', 'best practices', 'optimization guide', 'ClickHouse performance', 'database optimization']
doc_type: 'reference'
import TableOfContents from '@site/docs/guides/best-practices/_snippets/_performance_optimizations_table_of_contents.md';
Performance and Optimizations
This section contains tips and best practices for improving performance with ClickHouse.
We recommend users read
Core Concepts
as a precursor to this section,
which covers the main concepts required to improve performance.
slug: /optimize/bulk-inserts
sidebar_label: 'Bulk inserts'
title: 'Bulk inserts'
description: 'Sending a smaller amount of inserts that each contain more data will reduce the number of writes required.'
keywords: ['bulk insert', 'batch insert', 'insert optimization']
doc_type: 'guide'
import Content from '@site/docs/best-practices/_snippets/_bulk_inserts.md';
slug: /optimize/skipping-indexes
sidebar_label: 'Data skipping indexes'
sidebar_position: 2
description: 'Skip indexes enable ClickHouse to skip reading significant chunks of data that are guaranteed to have no matching values.'
title: 'Understanding ClickHouse Data Skipping Indexes'
doc_type: 'guide'
keywords: ['skipping indexes', 'data skipping', 'performance', 'indexing', 'best practices']
import simple_skip from '@site/static/images/guides/best-practices/simple_skip.png';
import bad_skip from '@site/static/images/guides/best-practices/bad_skip.png';
import Image from '@theme/IdealImage';
Understanding ClickHouse data skipping indexes
Introduction {#introduction}
Many factors affect ClickHouse query performance. The critical element in most scenarios is whether ClickHouse can use the primary key when evaluating the query WHERE clause condition. Accordingly, selecting a primary key that applies to the most common query patterns is essential for effective table design.
Nevertheless, no matter how carefully tuned the primary key, there will inevitably be query use cases that cannot efficiently use it. Users commonly rely on ClickHouse for time series type data, but they often wish to analyze that same data according to other business dimensions, such as customer id, website URL, or product number. In that case, query performance can be considerably worse because a full scan of each column value may be required to apply the WHERE clause condition. While ClickHouse is still relatively fast in those circumstances, evaluating millions or billions of individual values will cause "non-indexed" queries to execute much more slowly than those based on the primary key.
In a traditional relational database, one approach to this problem is to attach one or more "secondary" indexes to a table. This is a b-tree structure that permits the database to find all matching rows on disk in O(log(n)) time instead of O(n) time (a table scan), where n is the number of rows. However, this type of secondary index will not work for ClickHouse (or other column-oriented databases) because there are no individual rows on the disk to add to the index.
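The gap between the two complexities is easy to quantify with the 100-million-row example used later in this guide:

```python
import math

n = 100_000_000  # rows in the example table below
# A b-tree secondary index locates matches in O(log n) comparisons,
# versus O(n) row visits for a full table scan:
print(round(math.log2(n)))  # ~27 comparisons instead of 100,000,000
```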
Instead, ClickHouse provides a different type of index, which in specific circumstances can significantly improve query speed. These structures are labeled "Skip" indexes because they enable ClickHouse to skip reading significant chunks of data that are guaranteed to have no matching values.
Basic operation {#basic-operation}
Users can only employ Data Skipping Indexes on the MergeTree family of tables. Each data skipping index has four primary arguments:
Index name. The index name is used to create the index file in each partition. Also, it is required as a parameter when dropping or materializing the index.
Index expression. The index expression is used to calculate the set of values stored in the index. It can be a combination of columns, simple operators, and/or a subset of functions determined by the index type.
TYPE. The type of index controls the calculation that determines if it is possible to skip reading and evaluating each index block.
GRANULARITY. Each indexed block consists of GRANULARITY granules. For example, if the granularity of the primary table index is 8192 rows, and the index granularity is 4, each indexed "block" will be 32768 rows.
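For example, with the default primary index granularity and GRANULARITY 4:

```python
index_granularity = 8192    # rows per primary-index granule
skip_index_granularity = 4  # granules per skip-index block (GRANULARITY 4)

rows_per_index_block = index_granularity * skip_index_granularity
print(rows_per_index_block)  # 32768
```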
When a user creates a data skipping index, there will be two additional files in each data part directory for the table.
skp_idx_{index_name}.idx
, which contains the ordered expression values
skp_idx_{index_name}.mrk2
, which contains the corresponding offsets into the associated data column files.
If some portion of the WHERE clause filtering condition matches the skip index expression when executing a query and reading the relevant column files, ClickHouse will use the index file data to determine whether each relevant block of data must be processed or can be bypassed (assuming that the block has not already been excluded by applying the primary key). To use a very simplified example, consider the following table loaded with predictable data.
```sql
CREATE TABLE skip_table
(
my_key UInt64,
my_value UInt64
)
ENGINE = MergeTree PRIMARY KEY my_key
SETTINGS index_granularity=8192;
INSERT INTO skip_table SELECT number, intDiv(number,4096) FROM numbers(100000000);
```
When executing a simple query that does not use the primary key, all 100 million entries in the
my_value
column are scanned:
```sql
SELECT * FROM skip_table WHERE my_value IN (125, 700)
┌─my_key─┬─my_value─┐
│ 512000 │      125 │
│ 512001 │      125 │
│    ... │      ... │
└────────┴──────────┘
8192 rows in set. Elapsed: 0.079 sec. Processed 100.00 million rows, 800.10 MB (1.26 billion rows/s., 10.10 GB/s.)
```
Now add a very basic skip index:
sql
ALTER TABLE skip_table ADD INDEX vix my_value TYPE set(100) GRANULARITY 2;
Normally skip indexes are only applied on newly inserted data, so just adding the index won't affect the above query.
To index already existing data, use this statement:
sql
ALTER TABLE skip_table MATERIALIZE INDEX vix;
Rerun the query with the newly created index:
```sql
SELECT * FROM skip_table WHERE my_value IN (125, 700)
┌─my_key─┬─my_value─┐
│ 512000 │      125 │
│ 512001 │      125 │
│    ... │      ... │
└────────┴──────────┘
8192 rows in set. Elapsed: 0.051 sec. Processed 32.77 thousand rows, 360.45 KB (643.75 thousand rows/s., 7.08 MB/s.)
```
Instead of processing 100 million rows of 800 megabytes, ClickHouse has only read and analyzed 32768 rows of 360 kilobytes
-- four granules of 8192 rows each.
In a more visual form, this is how the 4096 rows with a
my_value
of 125 were read and selected, and how the following rows
were skipped without reading from disk:
Users can access detailed information about skip index usage by enabling the trace when executing queries. From
clickhouse-client, set the
send_logs_level
:
sql
SET send_logs_level='trace';
This will provide useful debugging information when trying to tune query SQL and table indexes. From the above
example, the debug log shows that the skip index dropped all but two granules:
txt
<Debug> default.skip_table (933d4b2c-8cea-4bf9-8c93-c56e900eefd1) (SelectExecutor): Index `vix` has dropped 6102/6104 granules.
Skip index types {#skip-index-types}
minmax {#minmax}
This lightweight index type requires no parameters. It stores the minimum and maximum values of the index expression
for each block (if the expression is a tuple, it separately stores the values for each element of the tuple). This type is ideal for columns that tend to be loosely sorted by value. This index type is usually the least expensive to apply during query processing.
This type of index only works correctly with a scalar or tuple expression -- the index will never be applied to expressions that return an array or map data type.
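A minmax skip index can be sketched in a few lines of Python. This is a toy model for intuition only; the block size and data below are illustrative:

```python
# Minimal sketch of a minmax skip index: one (min, max) pair per block.
def build_minmax(values, block_size):
    return [(min(values[i:i + block_size]), max(values[i:i + block_size]))
            for i in range(0, len(values), block_size)]

def blocks_to_read(index, target):
    """Return block numbers whose (min, max) range could contain target;
    all other blocks are skipped without being read from disk."""
    return [i for i, (lo, hi) in enumerate(index) if lo <= target <= hi]

# Loosely sorted data: each block covers a narrow, distinct value range.
values = list(range(1000))
index = build_minmax(values, 100)
print(blocks_to_read(index, 125))  # [1] -- only the block holding 100..199
```

On randomly ordered data, every block's range would overlap the target and nothing would be skipped, which is why this type works best on loosely sorted columns.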
set {#set}
This lightweight index type accepts a single parameter of the max_size of the value set per block (0 permits
an unlimited number of discrete values). This set contains all values in the block (or is empty if the number of values exceeds the max_size). This index type works well with columns with low cardinality within each set of granules (essentially, "clumped together") but higher cardinality overall.
The cost, performance, and effectiveness of this index is dependent on the cardinality within blocks. If each block contains a large number of unique values, either evaluating the query condition against a large index set will be very expensive, or the index will not be applied because the index is empty due to exceeding max_size.
Bloom filter types {#bloom-filter-types}
A
Bloom filter
is a data structure that allows space-efficient testing of set membership at the cost of a slight chance of false positives. A false positive is not a significant concern in the case of skip indexes because the only disadvantage is reading a few unnecessary blocks. However, the potential for false positives does mean that the indexed expression should be expected to be true, otherwise valid data may be skipped.
Because Bloom filters can more efficiently handle testing for a large number of discrete values, they can be appropriate for conditional expressions that produce more values to test. In particular, a Bloom filter index can be applied to arrays, where every value of the array is tested, and to maps, by converting either the keys or values to an array using the mapKeys or mapValues function.
There are three Data Skipping Index types based on Bloom filters:
The basic
bloom_filter
which takes a single optional parameter of the allowed "false positive" rate between 0 and 1 (if unspecified, 0.025 is used).
The specialized
tokenbf_v1
. It takes three parameters, all related to tuning the bloom filter used: (1) the size of the filter in bytes (larger filters have fewer false positives, at some cost in storage), (2) the number of hash functions applied (again, more hash functions reduce false positives), and (3) the seed for the bloom filter hash functions. See the calculator
here
for more detail on how these parameters affect bloom filter functionality.
This index works only with String, FixedString, and Map datatypes. The input expression is split into character sequences separated by non-alphanumeric characters. For example, a column value of
This is a candidate for a "full text" search
will contain the tokens
This
is
a
candidate
for
full
text
search
. It is intended for use in LIKE, EQUALS, IN, hasToken() and similar searches for words and other values within longer strings. For example, one possible use might be searching for a small number of class names or line numbers in a column of free form application log lines.
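The token-splitting rule described above can be sketched in Python. This is only an illustration of the splitting behavior (split on runs of non-alphanumeric characters), not ClickHouse's actual implementation:

```python
import re

def tokenize(s: str) -> list[str]:
    # Split on runs of non-alphanumeric characters, discarding empty pieces,
    # mirroring how tokenbf_v1 extracts tokens from a string value.
    return [t for t in re.split(r"[^0-9a-zA-Z]+", s) if t]

tokens = tokenize('This is a candidate for a "full text" search')
print(tokens)
# ['This', 'is', 'a', 'candidate', 'for', 'a', 'full', 'text', 'search']
```

Note that the quotation marks act as separators and never appear in any token, which is why quoted phrases are still findable word-by-word.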
The specialized
ngrambf_v1
. This index functions the same as the token index. It takes one additional parameter before the Bloom filter settings, the size of the ngrams to index. An ngram is a character string of length
n
of any characters, so the string
A short string
with an ngram size of 4 would be indexed as:
```text
'A sh', ' sho', 'shor', 'hort', 'ort ', 'rt s', 't st', ' str', 'stri', 'trin', 'ring'
```
This index can also be useful for text searches, particularly languages without word breaks, such as Chinese.
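A minimal sketch of n-gram extraction, reproducing the list above (illustrative only, not ClickHouse's internal code):

```python
def ngrams(s: str, n: int) -> list[str]:
    # All overlapping character substrings of length n.
    return [s[i:i + n] for i in range(len(s) - n + 1)]

print(ngrams("A short string", 4))
# ['A sh', ' sho', 'shor', 'hort', 'ort ', 'rt s', 't st', ' str', 'stri', 'trin', 'ring']
```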
Skip index functions {#skip-index-functions}
The core purpose of data-skipping indexes is to limit the amount of data analyzed by popular queries. Given the analytic nature of ClickHouse data, the pattern of those queries in most cases includes functional expressions. Accordingly, skip indexes must interact correctly with common functions to be efficient. This can happen either when:
* data is inserted and the index is defined as a functional expression (with the result of the expression stored in the index files), or
* the query is processed and the expression is applied to the stored index values to determine whether to exclude the block.
Each type of skip index works on a subset of available ClickHouse functions appropriate to the index implementation listed | {"source_file": "skipping-indexes.md"} |
59a1f175-dc9b-412c-bc53-032d7fdfe279 | Each type of skip index works on a subset of available ClickHouse functions appropriate to the index implementation listed
here
. In general, set indexes and Bloom filter based indexes (another type of set index) are both unordered and therefore do not work with ranges. In contrast, minmax indexes work particularly well with ranges since determining whether ranges intersect is very fast. The efficacy of the partial-match functions LIKE, startsWith, endsWith, and hasToken depends on the index type used, the index expression, and the particular shape of the data.
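The range check that makes minmax indexes cheap to evaluate can be sketched as a simple interval-disjointness test. The granule statistics and query range below are illustrative values, not real index data:

```python
def can_skip_granule(g_min, g_max, q_min, q_max) -> bool:
    # A minmax entry excludes a granule only when the stored [min, max]
    # interval and the query interval are disjoint.
    return g_max < q_min or g_min > q_max

# Hypothetical (min, max) stats for four granules of a loosely ordered column:
granules = [(0, 95), (90, 210), (200, 310), (300, 420)]
# Query: WHERE col BETWEEN 250 AND 260
skipped = [can_skip_granule(lo, hi, 250, 260) for lo, hi in granules]
print(skipped)  # [True, True, False, True]
```

Because only intersection is tested, a granule whose range merely brushes the query range must still be read, which is why loosely sorted data prunes less aggressively than perfectly sorted data.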
Skip index settings {#skip-index-settings}
There are two available settings that apply to skip indexes.
use_skip_indexes
(0 or 1, default 1). Not all queries can efficiently use skip indexes. If a particular filtering condition is
likely to include most granules, applying the data skipping index incurs an unnecessary, and sometimes significant, cost. Set the value to
0 for queries that are unlikely to benefit from any skip indexes.
force_data_skipping_indices
(comma separated list of index names). This setting can be used to prevent some kinds of inefficient
queries. In circumstances where querying a table is too expensive unless a skip index is used, using this setting with one or more index
names will return an exception for any query that does not use the listed index. This would prevent poorly written queries from
consuming server resources.
Skip index best practices {#skip-best-practices}
Skip indexes are not intuitive, especially for users accustomed to secondary row-based indexes from the RDBMS realm or inverted indexes from document stores. To get any benefit, applying a ClickHouse data skipping index must avoid enough granule reads to offset the cost of calculating the index. Critically, if a value occurs even once in an indexed block, it means the entire block must be read into memory and evaluated, and the index cost has been needlessly incurred.
Consider the following data distribution:
Assume the primary/order by key is
timestamp
, and there is an index on
visitor_id
. Consider the following query:
```sql
SELECT timestamp, url FROM table WHERE visitor_id = 1001
```
A traditional secondary index would be very advantageous with this kind of data distribution. Instead of reading all 32768 rows to find
the 5 rows with the requested visitor_id, the secondary index would include just five row locations, and only those five rows would be
read from disk. The exact opposite is true for a ClickHouse data skipping index. All 32768 values in the
visitor_id
column will be tested
regardless of the type of skip index. | {"source_file": "skipping-indexes.md"} |
4116ba67-5135-4b5c-bc83-75e50309c999 | Accordingly, the natural impulse to try to speed up ClickHouse queries by simply adding an index to key
columns is often incorrect. This advanced functionality should only be used after investigating other alternatives, such as modifying the primary key (see
How to Pick a Primary Key
), using projections, or using materialized views. Even when a data skipping index is appropriate, careful tuning of both the index and the table
will often be necessary.
In most cases a useful skip index requires a strong correlation between the primary key and the targeted, non-primary column/expression.
If there is no correlation (as in the above diagram), the chance of the filtering condition being met by at least one of the rows in
the block of several thousand values is high, and few blocks will be skipped. In contrast, if a range of values for the primary key (like time of
day) is strongly associated with the values in the potential index column (such as television viewer ages), then a minmax type of index
is likely to be beneficial. Note that it may be possible to increase this correlation when inserting data, either by including additional
columns in the sorting/ORDER BY key, or batching inserts in a way that values associated with the primary key are grouped on insert. For
example, all of the events for a particular site_id could be grouped and inserted together by the ingest process, even if the primary key
is a timestamp containing events from a large number of sites. This will result in many granules that contain only a few site IDs, so many
blocks could be skipped when searching by a specific site_id value.
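The effect of batching correlated values at insert time can be shown with a toy simulation. The granule size, site count, and row counts here are illustrative (the real default index granularity is 8192 rows), and the pruning rule is a simplified stand-in for a set or Bloom filter index:

```python
import random

GRANULE = 100  # rows per granule (illustrative, not the ClickHouse default)
sites = [s for s in range(16) for _ in range(3200)]  # 3200 events per site_id

def skipped_granules(rows, target):
    # A set/bloom-style skip index can prune a granule only if the target
    # value never occurs anywhere in it.
    n = 0
    for i in range(0, len(rows), GRANULE):
        if target not in rows[i:i + GRANULE]:
            n += 1
    return n

total = len(sites) // GRANULE  # 512 granules
random.seed(0)
shuffled = sites[:]
random.shuffle(shuffled)       # site_id uncorrelated with insert order
grouped = sorted(sites)        # events batched by site_id on insert

print(skipped_granules(shuffled, 7), "of", total)  # almost nothing skipped
print(skipped_granules(grouped, 7), "of", total)   # 480 of 512 skipped
```

In the grouped layout, site 7 occupies a contiguous run of 32 granules, so the other 480 are pruned; in the shuffled layout nearly every granule contains at least one row for site 7, so the index does almost no work.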
Another good candidate for a skip index is for high cardinality expressions where any one value is relatively sparse in the data. One example
might be an observability platform that tracks error codes in API requests. Certain error codes, while rare in the data, might be particularly
important for searches. A set skip index on the error_code column would allow bypassing the vast majority of blocks that don't contain
errors and therefore significantly improve error focused queries.
Finally, the key best practice is to test, test, test. Again, unlike b-tree secondary indexes or inverted indexes for searching documents,
data skipping index behavior is not easily predictable. Adding them to a table incurs a meaningful cost both on data ingest and on queries
that for any number of reasons don't benefit from the index. They should always be tested on real-world data, and testing should
include variations of the type, granularity size and other parameters. Testing will often reveal patterns and pitfalls that aren't obvious from
thought experiments alone.
Related docs {#related-docs}
Best practices guide
Data skipping index examples
Manipulating data skipping indices
System table information | {"source_file": "skipping-indexes.md"} |
57a80f85-6187-46ba-9c52-6e4ccdd60aed | slug: /optimize/skipping-indexes/examples
sidebar_label: 'Data Skipping Indexes - Examples'
sidebar_position: 2
description: 'Consolidated Skip Index Examples'
title: 'Data Skipping Index Examples'
doc_type: 'guide'
keywords: ['skipping indexes', 'data skipping', 'performance', 'indexing', 'best practices']
Data skipping index examples {#data-skipping-index-examples}
This page consolidates ClickHouse data skipping index examples, showing how to declare each type, when to use them, and how to verify they're applied. All features work with
MergeTree-family tables
.
Index syntax:
```sql
INDEX name expr TYPE type(...) [GRANULARITY N]
```
ClickHouse supports five skip index types:
| Index Type | Description |
|------------|-------------|
|
minmax
| Tracks minimum and maximum values in each granule |
|
set(N)
| Stores up to N distinct values per granule |
|
bloom_filter([false_positive_rate])
| Probabilistic filter for existence checks |
|
ngrambf_v1
| N-gram bloom filter for substring searches |
|
tokenbf_v1
| Token-based bloom filter for full-text searches |
Each section provides examples with sample data and demonstrates how to verify index usage in query execution.
MinMax index {#minmax-index}
The
minmax
index is best for range predicates on loosely sorted data or columns correlated with
ORDER BY
.
```sql
-- Define in CREATE TABLE
CREATE TABLE events
(
ts DateTime,
user_id UInt64,
value UInt32,
INDEX ts_minmax ts TYPE minmax GRANULARITY 1
)
ENGINE=MergeTree
ORDER BY ts;
-- Or add later and materialize
ALTER TABLE events ADD INDEX ts_minmax ts TYPE minmax GRANULARITY 1;
ALTER TABLE events MATERIALIZE INDEX ts_minmax;
-- Query that benefits from the index
SELECT count() FROM events WHERE ts >= now() - 3600;
-- Verify usage
EXPLAIN indexes = 1
SELECT count() FROM events WHERE ts >= now() - 3600;
```
See a
worked example
with
EXPLAIN
and pruning.
Set index {#set-index}
Use the
set
index when local (per-block) cardinality is low; not helpful if each block has many distinct values.
```sql
ALTER TABLE events ADD INDEX user_set user_id TYPE set(100) GRANULARITY 1;
ALTER TABLE events MATERIALIZE INDEX user_set;
SELECT * FROM events WHERE user_id IN (101, 202);
EXPLAIN indexes = 1
SELECT * FROM events WHERE user_id IN (101, 202);
```
A creation/materialization workflow and the before/after effect are shown in the
basic operation guide
.
Generic Bloom filter (scalar) {#generic-bloom-filter-scalar}
The
bloom_filter
index is good for "needle in a haystack" equality/IN membership. It accepts an optional parameter which is the false-positive rate (default 0.025).
```sql
ALTER TABLE events ADD INDEX value_bf value TYPE bloom_filter(0.01) GRANULARITY 3;
ALTER TABLE events MATERIALIZE INDEX value_bf;
SELECT * FROM events WHERE value IN (7, 42, 99);
EXPLAIN indexes = 1
SELECT * FROM events WHERE value IN (7, 42, 99);
``` | {"source_file": "skipping-indexes-examples.md"} |
470b6219-5b4f-4ee1-bf02-6b0adc34558c | SELECT * FROM events WHERE value IN (7, 42, 99);
EXPLAIN indexes = 1
SELECT * FROM events WHERE value IN (7, 42, 99);
```
N-gram Bloom filter (ngrambf_v1) for substring search {#n-gram-bloom-filter-ngrambf-v1-for-substring-search}
The
ngrambf_v1
index splits strings into n-grams. It works well for
LIKE '%...%'
queries. It supports String/FixedString/Map (via mapKeys/mapValues), as well as tunable size, hash count, and seed. See the documentation for
N-gram bloom filter
for further details.
```sql
-- Create index for substring search
ALTER TABLE logs ADD INDEX msg_ngram msg TYPE ngrambf_v1(3, 10000, 3, 7) GRANULARITY 1;
ALTER TABLE logs MATERIALIZE INDEX msg_ngram;
-- Substring search
SELECT count() FROM logs WHERE msg LIKE '%timeout%';
EXPLAIN indexes = 1
SELECT count() FROM logs WHERE msg LIKE '%timeout%';
```
This guide
shows practical examples and when to use token vs ngram.
Parameter optimization helpers:
The four ngrambf_v1 parameters (n-gram size, bitmap size, hash functions, seed) significantly impact performance and memory usage. Use these functions to calculate optimal bitmap size and hash function count based on your expected n-gram volume and desired false positive rate:
```sql
CREATE FUNCTION bfEstimateFunctions AS
(total_grams, bits) -> round((bits / total_grams) * log(2));
CREATE FUNCTION bfEstimateBmSize AS
(total_grams, p_false) -> ceil((total_grams * log(p_false)) / log(1 / pow(2, log(2))));
-- Example sizing for 4300 ngrams, p_false = 0.0001
SELECT bfEstimateBmSize(4300, 0.0001) / 8 AS size_bytes; -- ~10304
SELECT bfEstimateFunctions(4300, bfEstimateBmSize(4300, 0.0001)) AS k; -- ~13
```
See
parameter docs
for complete tuning guidance.
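For reference, the same sizing math can be written in plain Python using the classic Bloom filter formulas that the SQL helpers above encode (optimal bitmap size m = ceil(n·ln p / ln(1/2^ln 2)) and hash count k = round((m/n)·ln 2)):

```python
import math

def bf_bitmap_bits(total_grams: int, p_false: float) -> int:
    # Optimal bitmap size in bits for n items at false-positive rate p.
    return math.ceil((total_grams * math.log(p_false)) / math.log(1 / 2 ** math.log(2)))

def bf_hash_count(total_grams: int, bits: int) -> int:
    # Optimal number of hash functions for that bitmap size.
    return round((bits / total_grams) * math.log(2))

bits = bf_bitmap_bits(4300, 0.0001)
print(bits // 8)                  # 10304 bytes, matching the SQL example
print(bf_hash_count(4300, bits))  # 13 hash functions
```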
Token Bloom filter (tokenbf_v1) for word-based search {#token-bloom-filter-tokenbf-v1-for-word-based-search}
tokenbf_v1
indexes tokens separated by non-alphanumeric characters. You should use it with
hasToken
,
LIKE
word patterns or equals/IN. It supports
String
/
FixedString
/
Map
types.
See
Token bloom filter
and
Bloom filter types
pages for more details.
```sql
ALTER TABLE logs ADD INDEX msg_token lower(msg) TYPE tokenbf_v1(10000, 7, 7) GRANULARITY 1;
ALTER TABLE logs MATERIALIZE INDEX msg_token;
-- Word search (case-insensitive via lower)
SELECT count() FROM logs WHERE hasToken(lower(msg), 'exception');
EXPLAIN indexes = 1
SELECT count() FROM logs WHERE hasToken(lower(msg), 'exception');
```
See observability examples and guidance on token vs ngram
here
.
Add indexes during CREATE TABLE (multiple examples) {#add-indexes-during-create-table-multiple-examples}
Skipping indexes also support composite expressions and
Map
/
Tuple
/
Nested
types. This is demonstrated in the example below:
```sql
CREATE TABLE t
(
u64 UInt64,
s String,
m Map(String, String), | {"source_file": "skipping-indexes-examples.md"} |
777f6a07-3f3c-4ee7-ba2e-9323ff6f12e2 | ```sql
CREATE TABLE t
(
u64 UInt64,
s String,
m Map(String, String),
INDEX idx_bf u64 TYPE bloom_filter(0.01) GRANULARITY 3,
INDEX idx_minmax u64 TYPE minmax GRANULARITY 1,
INDEX idx_set u64 * length(s) TYPE set(1000) GRANULARITY 4,
INDEX idx_ngram s TYPE ngrambf_v1(3, 10000, 3, 7) GRANULARITY 1,
INDEX idx_token mapKeys(m) TYPE tokenbf_v1(10000, 7, 7) GRANULARITY 1
)
ENGINE = MergeTree
ORDER BY u64;
```
Materializing on existing data and verifying {#materializing-on-existing-data-and-verifying}
You can add an index to existing data parts using
MATERIALIZE
, and inspect pruning with
EXPLAIN
or trace logs, as shown below:
```sql
ALTER TABLE t MATERIALIZE INDEX idx_bf;
EXPLAIN indexes = 1
SELECT count() FROM t WHERE u64 IN (123, 456);
-- Optional: detailed pruning info
SET send_logs_level = 'trace';
```
This
worked minmax example
demonstrates EXPLAIN output structure and pruning counts.
When to use and when to avoid skipping indexes {#when-use-and-when-to-avoid}
Use skip indexes when:
Filter values are sparse within data blocks
Strong correlation exists with
ORDER BY
columns or data ingestion patterns group similar values together
Performing text searches on large log datasets (
ngrambf_v1
/
tokenbf_v1
types)
Avoid skip indexes when:
Most blocks likely contain at least one matching value (blocks will be read regardless)
Filtering on high-cardinality columns with no correlation to data ordering
:::note Important considerations
If a value appears even once in a data block, ClickHouse must read the entire block. Test indexes with realistic datasets and adjust granularity and type-specific parameters based on actual performance measurements.
:::
Temporarily ignore or force indexes {#temporarily-ignore-or-force-indexes}
Disable specific indexes by name for individual queries during testing and troubleshooting. Settings also exist to force index usage when needed. See
ignore_data_skipping_indices
.
```sql
-- Ignore an index by name
SELECT * FROM logs
WHERE hasToken(lower(msg), 'exception')
SETTINGS ignore_data_skipping_indices = 'msg_token';
```
Notes and caveats {#notes-and-caveats}
Skipping indexes are only supported on
MergeTree-family tables
; pruning happens at the granule/block level.
Bloom-filter-based indexes are probabilistic (false positives cause extra reads but won't skip valid data).
Bloom filters and other skip indexes should be validated with
EXPLAIN
and tracing; adjust granularity to balance pruning vs. index size.
Related docs {#related-docs}
Data skipping index guide
Best practices guide
Manipulating data skipping indices
System table information | {"source_file": "skipping-indexes-examples.md"} |
221dd966-1df5-41fe-a3ac-d4f0f6deffa6 | slug: /optimize/asynchronous-inserts
sidebar_label: 'Asynchronous Inserts'
title: 'Asynchronous Inserts (async_insert)'
description: 'Use asynchronous inserts as an alternative to batching data.'
doc_type: 'guide'
keywords: ['asynchronous inserts', 'async_insert', 'best practices', 'batching data', 'performance optimization']
import Content from '@site/docs/best-practices/_snippets/_async_inserts.md'; | {"source_file": "asyncinserts.md"} |
2cf04327-f157-4465-bc17-21c63db60ede | slug: '/examples/aggregate-function-combinators/uniqArrayIf'
title: 'uniqArrayIf'
description: 'Example of using the uniqArrayIf combinator'
keywords: ['uniq', 'array', 'if', 'combinator', 'examples', 'uniqArrayIf']
sidebar_label: 'uniqArrayIf'
doc_type: 'reference'
uniqArrayIf {#uniqarrayif}
Description {#description}
The
Array
and
If
combinators can be applied to the
uniq
function to count the number of unique values in arrays for rows where the
condition is true, using the
uniqArrayIf
aggregate combinator function.
:::note
-
If
and -
Array
can be combined. However,
Array
must come first, then
If
.
:::
This is useful when you want to count unique elements in an array based on
specific conditions without having to use
arrayJoin
.
Example usage {#example-usage}
Count unique products viewed by segment type and engagement level {#count-unique-products}
In this example, we'll use a table with user shopping session data to count the
number of unique products viewed by users of a specific user segment and with
an engagement metric of time spent in the session.
```sql title="Query"
CREATE TABLE user_shopping_sessions
(
session_date Date,
user_segment String,
viewed_products Array(String),
session_duration_minutes Int32
) ENGINE = Memory;
INSERT INTO user_shopping_sessions VALUES
('2024-01-01', 'new_customer', ['smartphone_x', 'headphones_y', 'smartphone_x'], 12),
('2024-01-01', 'returning', ['laptop_z', 'smartphone_x', 'tablet_a'], 25),
('2024-01-01', 'new_customer', ['smartwatch_b', 'headphones_y', 'fitness_tracker'], 8),
('2024-01-02', 'returning', ['laptop_z', 'external_drive', 'laptop_z'], 30),
('2024-01-02', 'new_customer', ['tablet_a', 'keyboard_c', 'tablet_a'], 15),
('2024-01-02', 'premium', ['smartphone_x', 'smartwatch_b', 'headphones_y'], 22);
-- Count unique products viewed by segment type and engagement level
SELECT
session_date,
-- Count unique products viewed in long sessions by new customers
uniqArrayIf(viewed_products, user_segment = 'new_customer' AND session_duration_minutes > 10) AS new_customer_engaged_products,
-- Count unique products viewed by returning customers
uniqArrayIf(viewed_products, user_segment = 'returning') AS returning_customer_products,
-- Count unique products viewed across all sessions
uniqArray(viewed_products) AS total_unique_products
FROM user_shopping_sessions
GROUP BY session_date
ORDER BY session_date
FORMAT Vertical;
```
```response title="Response"
Row 1:
──────
session_date:                2024-01-01
new_customer⋯ed_products:    2
returning_customer_products: 3
total_unique_products:       6

Row 2:
──────
session_date:                2024-01-02
new_customer⋯ed_products:    2
returning_customer_products: 2
total_unique_products:       7
```
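For intuition, the first aggregate can be reproduced over the same 2024-01-01 rows in Python. This is a sketch of the semantics of uniqArrayIf (flatten qualifying arrays, count distinct), not of how ClickHouse executes it:

```python
sessions = [
    ("2024-01-01", "new_customer", ["smartphone_x", "headphones_y", "smartphone_x"], 12),
    ("2024-01-01", "returning", ["laptop_z", "smartphone_x", "tablet_a"], 25),
    ("2024-01-01", "new_customer", ["smartwatch_b", "headphones_y", "fitness_tracker"], 8),
]

# uniqArrayIf(viewed_products, user_segment = 'new_customer'
#             AND session_duration_minutes > 10)
engaged = {
    product
    for _, segment, products, minutes in sessions
    for product in products
    if segment == "new_customer" and minutes > 10
}
print(len(engaged))  # 2
```

Only the first session qualifies (the third is a new customer but lasted just 8 minutes), and its duplicate 'smartphone_x' view is counted once, matching the value 2 in Row 1 above.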
See also {#see-also}
uniq
Array combinator
If combinator | {"source_file": "uniqArrayIf.md"} |
9a827383-11e1-42bb-9517-827445fef989 | slug: '/examples/aggregate-function-combinators/sumSimpleState'
title: 'sumSimpleState'
description: 'Example of using the sumSimpleState combinator'
keywords: ['sum', 'state', 'simple', 'combinator', 'examples', 'sumSimpleState']
sidebar_label: 'sumSimpleState'
doc_type: 'reference'
sumSimpleState {#sumsimplestate}
Description {#description}
The
SimpleState
combinator can be applied to the
sum
function to return the sum across all input values. It returns the result with
type
SimpleAggregateFunction
.
Example usage {#example-usage}
Tracking upvotes and downvotes {#tracking-post-votes}
Let's look at a practical example using a table that tracks votes on posts.
For each post, we want to maintain running totals of upvotes, downvotes, and an
overall score. Using the
SimpleAggregateFunction
type with sum is well suited to
this use case, as we only need to store the running totals, not the entire state
of the aggregation. As a result, it will be faster and will not require merging
of partial aggregate states.
First, we create a table for the raw data:
```sql title="Query"
CREATE TABLE raw_votes
(
    post_id UInt32,
    vote_type Enum8('upvote' = 1, 'downvote' = -1)
)
ENGINE = MergeTree()
ORDER BY post_id;
```
Next, we create a target table which will store the aggregated data:
```sql
CREATE TABLE vote_aggregates
(
    post_id UInt32,
    upvotes SimpleAggregateFunction(sum, UInt64),
    downvotes SimpleAggregateFunction(sum, UInt64),
    score SimpleAggregateFunction(sum, Int64)
)
ENGINE = AggregatingMergeTree()
ORDER BY post_id;
```
We then create a materialized view with
SimpleAggregateFunction
type columns:
```sql
CREATE MATERIALIZED VIEW mv_vote_processor TO vote_aggregates
AS
SELECT
    post_id,
    -- Initial value for sum state (1 if upvote, 0 otherwise)
    toUInt64(vote_type = 'upvote') AS upvotes,
    -- Initial value for sum state (1 if downvote, 0 otherwise)
    toUInt64(vote_type = 'downvote') AS downvotes,
    -- Initial value for sum state (1 for upvote, -1 for downvote)
    toInt64(vote_type) AS score
FROM raw_votes;
```
Insert sample data:
```sql
INSERT INTO raw_votes VALUES
    (1, 'upvote'),
    (1, 'upvote'),
    (1, 'downvote'),
    (2, 'upvote'),
    (2, 'downvote'),
    (3, 'downvote');
```
Query the materialized view using the
SimpleState
combinator:
```sql
SELECT
    post_id,
    sum(upvotes) AS total_upvotes,
    sum(downvotes) AS total_downvotes,
    sum(score) AS total_score
FROM vote_aggregates -- Query the target table
GROUP BY post_id
ORDER BY post_id ASC;
```
```response
┌─post_id─┬─total_upvotes─┬─total_downvotes─┬─total_score─┐
│       1 │             2 │               1 │           1 │
│       2 │             1 │               1 │           0 │
│       3 │             0 │               1 │          -1 │
└─────────┴───────────────┴─────────────────┴─────────────┘
```
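The running totals kept by the SimpleAggregateFunction(sum, ...) columns can be mimicked in Python. This is illustrative only; the real merging happens inside AggregatingMergeTree when parts are merged:

```python
from collections import defaultdict

votes = [(1, "upvote"), (1, "upvote"), (1, "downvote"),
         (2, "upvote"), (2, "downvote"), (3, "downvote")]

totals = defaultdict(lambda: [0, 0, 0])  # [upvotes, downvotes, score] per post
for post_id, vote in votes:
    up = 1 if vote == "upvote" else 0
    totals[post_id][0] += up          # sum of upvote flags
    totals[post_id][1] += 1 - up      # sum of downvote flags
    totals[post_id][2] += 1 if up else -1  # running score

for post_id in sorted(totals):
    print(post_id, totals[post_id])
# 1 [2, 1, 1]
# 2 [1, 1, 0]
# 3 [0, 1, -1]
```

Because sum only needs a running number (not a full aggregate state), each incoming row simply adds to the stored total, which is exactly what makes SimpleAggregateFunction cheaper than AggregateFunction here.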
See also {#see-also}
sum
SimpleState combinator
SimpleAggregateFunction type | {"source_file": "sumSimpleState.md"} |
fdf17cf9-9672-4cfb-abd3-c7a3fdaf0390 | slug: '/examples/aggregate-function-combinators/avgMerge'
title: 'avgMerge'
description: 'Example of using the avgMerge combinator'
keywords: ['avg', 'merge', 'combinator', 'examples', 'avgMerge']
sidebar_label: 'avgMerge'
doc_type: 'reference'
avgMerge {#avgMerge}
Description {#description}
The
Merge
combinator
can be applied to the
avg
function to produce a final result by combining partial aggregate states.
Example usage {#example-usage}
The
Merge
combinator is closely related to the
State
combinator. Refer to
"avgState example usage"
for an example of both
avgMerge
and
avgState
.
See also {#see-also}
avg
Merge
MergeState | {"source_file": "avgMerge.md"} |
00b08df8-8741-4b78-90cd-ec00357e818b | slug: '/examples/aggregate-function-combinators/uniqArray'
title: 'uniqArray'
description: 'Example of using the uniqArray combinator'
keywords: ['uniq', 'array', 'combinator', 'examples', 'uniqArray']
sidebar_label: 'uniqArray'
doc_type: 'reference'
uniqArray {#uniqarray}
Description {#description}
The
Array
combinator
can be applied to the
uniq
function to calculate the approximate number of unique elements across all arrays,
using the
uniqArray
aggregate combinator function.
The
uniqArray
function is useful when you need to count unique elements across
multiple arrays in a dataset. It's equivalent to using
uniq(arrayJoin())
, where
arrayJoin
first flattens the arrays and then
uniq
counts the unique elements.
Example usage {#example-usage}
In this example, we'll use a sample dataset of user interests across different
categories to demonstrate how
uniqArray
works. We'll compare it with
uniq(arrayJoin())
to show the difference in counting unique elements.
```sql title="Query"
CREATE TABLE user_interests
(
user_id UInt32,
interests Array(String)
) ENGINE = Memory;
INSERT INTO user_interests VALUES
(1, ['reading', 'gaming', 'music']),
(2, ['gaming', 'sports', 'music']),
(3, ['reading', 'cooking']);
SELECT
uniqArray(interests) AS unique_interests_total,
uniq(arrayJoin(interests)) AS unique_interests_arrayJoin
FROM user_interests;
```
The
uniqArray
function counts unique elements across all arrays combined, similar to
uniq(arrayJoin())
.
In this example:
-
uniqArray
returns 5 because there are 5 unique interests across all users: 'reading', 'gaming', 'music', 'sports', 'cooking'
-
uniq(arrayJoin())
also returns 5, showing that both functions count unique elements across all arrays
```response title="Response"
   ┌─unique_interests_total─┬─unique_interests_arrayJoin─┐
1. │                      5 │                          5 │
   └────────────────────────┴────────────────────────────┘
```
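The equivalence can be checked with a few lines of Python over the same arrays (a sketch of the semantics; ClickHouse's uniq is an approximate counter, exact at this scale):

```python
from itertools import chain

interests = [
    ["reading", "gaming", "music"],
    ["gaming", "sports", "music"],
    ["reading", "cooking"],
]

# uniqArray(interests) ~ count distinct elements after flattening,
# i.e. the same result as uniq(arrayJoin(interests)).
unique_total = len(set(chain.from_iterable(interests)))
print(unique_total)  # 5
```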
See also {#see-also}
uniq
arrayJoin
Array combinator
uniqCombined | {"source_file": "uniqArray.md"} |
048929d9-5e6e-4f08-afdf-5c65f22b7452 | slug: '/examples/aggregate-function-combinators/sumIf'
title: 'sumIf'
description: 'Example of using the sumIf combinator'
keywords: ['sum', 'if', 'combinator', 'examples', 'sumIf']
sidebar_label: 'sumIf'
doc_type: 'reference'
sumIf {#sumif}
Description {#description}
The
If
combinator can be applied to the
sum
function to calculate the sum of values for rows where the condition is true,
using the
sumIf
aggregate combinator function.
Example usage {#example-usage}
In this example, we'll create a table that stores sales data with success flags,
and we'll use
sumIf
to calculate the total sales amount for successful transactions.
```sql title="Query"
CREATE TABLE sales(
transaction_id UInt32,
amount Decimal(10,2),
is_successful UInt8
) ENGINE = Log;
INSERT INTO sales VALUES
(1, 100.50, 1),
(2, 200.75, 1),
(3, 150.25, 0),
(4, 300.00, 1),
(5, 250.50, 0),
(6, 175.25, 1);
SELECT
sumIf(amount, is_successful = 1) AS total_successful_sales
FROM sales;
```
The
sumIf
function will sum only the amounts where
is_successful = 1
.
In this case, it will sum: 100.50 + 200.75 + 300.00 + 175.25.
```response title="Response"
   ┌─total_successful_sales─┐
1. │                 776.50 │
   └────────────────────────┘
```
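The same arithmetic, sketched in Python over the inserted rows (an illustration of the sumIf condition, not of ClickHouse's Decimal arithmetic):

```python
sales = [
    (1, 100.50, 1), (2, 200.75, 1), (3, 150.25, 0),
    (4, 300.00, 1), (5, 250.50, 0), (6, 175.25, 1),
]

# sumIf(amount, is_successful = 1): only rows with the flag set contribute.
total_successful = sum(amount for _, amount, ok in sales if ok == 1)
print(total_successful)  # 776.5
```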
Calculate trading volume by price direction {#calculate-trading-vol-price-direction}
In this example we'll use the
stock
table available at
ClickHouse playground
to calculate trading volume by price direction in the first half of the year 2002.
```sql title="Query"
SELECT
    toStartOfMonth(date) AS month,
    formatReadableQuantity(sumIf(volume, price > open)) AS volume_on_up_days,
    formatReadableQuantity(sumIf(volume, price < open)) AS volume_on_down_days,
    formatReadableQuantity(sumIf(volume, price = open)) AS volume_on_neutral_days,
    formatReadableQuantity(sum(volume)) AS total_volume
FROM stock.stock
WHERE date BETWEEN '2002-01-01' AND '2002-12-31'
GROUP BY month
ORDER BY month;
```
| {"source_file": "sumIf.md"} |
c606f636-df08-479d-a5a5-81038868635a | markdown
βββββββmonthββ¬βvolume_on_up_daysββ¬βvolume_on_down_daysββ¬βvolume_on_neutral_daysββ¬βtotal_volumeβββ
1. β 2002-01-01 β 26.07 billion β 30.74 billion β 781.80 million β 57.59 billion β
2. β 2002-02-01 β 20.84 billion β 29.60 billion β 642.36 million β 51.09 billion β
3. β 2002-03-01 β 28.81 billion β 23.57 billion β 762.60 million β 53.14 billion β
4. β 2002-04-01 β 24.72 billion β 30.99 billion β 763.92 million β 56.47 billion β
5. β 2002-05-01 β 25.09 billion β 30.57 billion β 858.57 million β 56.52 billion β
6. β 2002-06-01 β 29.10 billion β 30.88 billion β 875.71 million β 60.86 billion β
7. β 2002-07-01 β 32.27 billion β 41.73 billion β 747.32 million β 74.75 billion β
8. β 2002-08-01 β 28.57 billion β 27.49 billion β 1.17 billion β 57.24 billion β
9. β 2002-09-01 β 23.37 billion β 31.02 billion β 775.66 million β 55.17 billion β
10. β 2002-10-01 β 38.57 billion β 34.05 billion β 956.48 million β 73.57 billion β
11. β 2002-11-01 β 34.90 billion β 25.47 billion β 998.34 million β 61.37 billion β
12. β 2002-12-01 β 22.99 billion β 28.65 billion β 1.14 billion β 52.79 billion β
ββββββββββββββ΄ββββββββββββββββββββ΄ββββββββββββββββββββββ΄βββββββββββββββββββββββββ΄ββββββββββββββββ
## Calculate trading volume by stock symbol {#calculate-trading-volume}

In this example we'll use the `stock` table available at the ClickHouse playground to calculate trading volume by stock symbol in 2006 for three of the largest tech companies at the time.

```sql title="Query"
SELECT
    toStartOfMonth(date) AS month,
    formatReadableQuantity(sumIf(volume, symbol = 'AAPL')) AS apple_volume,
    formatReadableQuantity(sumIf(volume, symbol = 'MSFT')) AS microsoft_volume,
    formatReadableQuantity(sumIf(volume, symbol = 'GOOG')) AS google_volume,
    sum(volume) AS total_volume,
    round(sumIf(volume, symbol IN ('AAPL', 'MSFT', 'GOOG')) / sum(volume) * 100, 2) AS major_tech_percentage
FROM stock.stock
WHERE date BETWEEN '2006-01-01' AND '2006-12-31'
GROUP BY month
ORDER BY month;
```
```markdown title="Response"
βββββββmonthββ¬βapple_volumeββββ¬βmicrosoft_volumeββ¬βgoogle_volumeβββ¬βtotal_volumeββ¬βmajor_tech_percentageββ
 1. β 2006-01-01 β 782.21 million β 1.39 billion β 299.69 million β 84343937700 β 2.93 β
 2. β 2006-02-01 β 670.38 million β 1.05 billion β 297.65 million β 73524748600 β 2.74 β
 3. β 2006-03-01 β 744.85 million β 1.39 billion β 288.36 million β 87960830800 β 2.75 β
 4. β 2006-04-01 β 718.97 million β 1.45 billion β 185.65 million β 78031719800 β 3.02 β
 5. β 2006-05-01 β 557.89 million β 2.32 billion β 174.94 million β 97096584100 β 3.14 β
 6. β 2006-06-01 β 641.48 million β 1.98 billion β 142.55 million β 96304086800 β 2.87 β
 7. β 2006-07-01 β 624.93 million β 1.33 billion β 127.74 million β 79940921800 β 2.61 β
 8. β 2006-08-01 β 639.35 million β 1.13 billion β 107.16 million β 84251753200 β 2.23 β
 9. β 2006-09-01 β 633.45 million β 1.10 billion β 121.72 million β 82775234300 β 2.24 β
10. β 2006-10-01 β 514.82 million β 1.29 billion β 158.90 million β 93406712600 β 2.1 β
11. β 2006-11-01 β 494.37 million β 1.24 billion β 118.49 million β 90177365500 β 2.06 β
12. β 2006-12-01 β 603.95 million β 1.14 billion β 91.77 million β 80499584100 β 2.28 β
    ββββββββββββββ΄βββββββββββββββββ΄βββββββββββββββββββ΄βββββββββββββββββ΄βββββββββββββββ΄ββββββββββββββββββββββββ
```

## See also {#see-also}
- `sum`
- `If` combinator
---
slug: '/examples/aggregate-function-combinators/quantilesTimingArrayIf'
title: 'quantilesTimingArrayIf'
description: 'Example of using the quantilesTimingArrayIf combinator'
keywords: ['quantilesTiming', 'array', 'if', 'combinator', 'examples', 'quantilesTimingArrayIf']
sidebar_label: 'quantilesTimingArrayIf'
doc_type: 'reference'
---
# quantilesTimingArrayIf {#quantilestimingarrayif}

## Description {#description}

The `Array` and `If` combinators can be applied to the `quantilesTiming` function to calculate quantiles of timing values in arrays for rows where the condition is true, using the `quantilesTimingArrayIf` aggregate combinator function.

## Example usage {#example-usage}

In this example, we'll create a table that stores API response times for different endpoints, and we'll use `quantilesTimingArrayIf` to calculate response time quantiles for successful requests.
```sql title="Query"
CREATE TABLE api_responses(
endpoint String,
response_times_ms Array(UInt32),
success_rate Float32
) ENGINE = Log;
INSERT INTO api_responses VALUES
('orders', [82, 94, 98, 87, 103, 92, 89, 105], 0.98),
('products', [45, 52, 48, 51, 49, 53, 47, 50], 0.95),
('users', [120, 125, 118, 122, 121, 119, 123, 124], 0.92);
SELECT
endpoint,
quantilesTimingArrayIf(0, 0.25, 0.5, 0.75, 0.95, 0.99, 1.0)(response_times_ms, success_rate >= 0.95) as response_time_quantiles
FROM api_responses
GROUP BY endpoint;
```
The `quantilesTimingArrayIf` function will calculate quantiles only for endpoints with a success rate of at least 95%. The returned array contains the following quantiles in order:
- 0 (minimum)
- 0.25 (first quartile)
- 0.5 (median)
- 0.75 (third quartile)
- 0.95 (95th percentile)
- 0.99 (99th percentile)
- 1.0 (maximum)
```response title="Response"
ββendpointββ¬βresponse_time_quantilesββββββββββββββββββββββββββββββββββββββββββββββ
1. β orders β [82, 87, 92, 98, 103, 104, 105] β
2. β products β [45, 47, 49, 51, 52, 52, 53] β
3. β users β [nan, nan, nan, nan, nan, nan, nan] β
   ββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
```
## See also {#see-also}
- `quantilesTiming`
- `If` combinator
---
slug: '/examples/aggregate-function-combinators/avgResample'
title: 'avgResample'
description: 'Example of using the Resample combinator with avg'
keywords: ['avg', 'Resample', 'combinator', 'examples', 'avgResample']
sidebar_label: 'avgResample'
doc_type: 'reference'
---
# avgResample {#avgresample}

## Description {#description}

The `Resample` combinator can be applied to the `avg` aggregate function to calculate the average of a column's values, resampled over a specified key column into a fixed number of intervals (`N`).

## Example usage {#example-usage}

### Basic example {#basic-example}

Let's look at an example. We'll create a table which contains the `name`, `age` and `wage` of employees, and we'll insert some data into it:
```sql
CREATE TABLE employee_data
(
name String,
age UInt8,
wage Float32
)
ENGINE = MergeTree()
ORDER BY tuple();

INSERT INTO employee_data (name, age, wage) VALUES
('John', 16, 10.0),
('Alice', 30, 15.0),
('Mary', 35, 8.0),
('Evelyn', 48, 11.5),
('David', 62, 9.9),
('Brian', 60, 16.0);
```
Let's get the average wage of the people whose age lies in the intervals of `[30,60)` and `[60,75)` (`[` is inclusive and `)` is exclusive). Since we use integer representation for age, we get ages in the intervals `[30, 59]` and `[60, 74]`.

To do so we apply the `Resample` combinator to the `avg` aggregate function.
```sql title="Query"
WITH avg_wage AS
(
    SELECT avgResample(30, 75, 30)(wage, age) AS original_avg_wage
    FROM employee_data
)
SELECT
    arrayMap(x -> round(x, 3), original_avg_wage) AS avg_wage_rounded
FROM avg_wage;
```
```response title="Response"
ββavg_wage_roundedββ
β [11.5,12.95] β
ββββββββββββββββββββ
```
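For comparison, here is an illustrative sketch (an addition, not from the original page) that applies the same `Resample` parameters to `count`, yielding the number of employees falling into each age interval:

```sql
SELECT countResample(30, 75, 30)(name, age) AS employees_per_interval
FROM employee_data;
-- Expected: [3, 2], since three employees are aged [30, 59] and two are aged [60, 74]
```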
## See also {#see-also}
- `avg`
- `Resample` combinator
---
slug: '/examples/aggregate-function-combinators/countIf'
title: 'countIf'
description: 'Example of using the countIf combinator'
keywords: ['count', 'if', 'combinator', 'examples', 'countIf']
sidebar_label: 'countIf'
doc_type: 'reference'
---
# countIf {#countif}

## Description {#description}

The `If` combinator can be applied to the `count` function to count the number of rows where the condition is true, using the `countIf` aggregate combinator function.

## Example usage {#example-usage}

In this example, we'll create a table that stores user login attempts, and we'll use `countIf` to count the number of successful logins.
```sql title="Query"
CREATE TABLE login_attempts(
user_id UInt32,
timestamp DateTime,
is_successful UInt8
) ENGINE = Log;
INSERT INTO login_attempts VALUES
(1, '2024-01-01 10:00:00', 1),
(1, '2024-01-01 10:05:00', 0),
(1, '2024-01-01 10:10:00', 1),
(2, '2024-01-01 11:00:00', 1),
(2, '2024-01-01 11:05:00', 1),
(2, '2024-01-01 11:10:00', 0);
SELECT
user_id,
countIf(is_successful = 1) AS successful_logins
FROM login_attempts
GROUP BY user_id;
```
The `countIf` function will count only the rows where `is_successful = 1` for each user.
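As a side note (an addition, not part of the original example): since `is_successful` is a `UInt8` flag and ClickHouse treats any non-zero value as true, the explicit comparison can be dropped for an equivalent result:

```sql
SELECT
    user_id,
    countIf(is_successful) AS successful_logins
FROM login_attempts
GROUP BY user_id;
```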
```response title="Response"
ββuser_idββ¬βsuccessful_loginsββ
1. β 1 β 2 β
2. β 2 β 2 β
   βββββββββββ΄ββββββββββββββββββββ
```
## See also {#see-also}
- `count`
- `If` combinator
---
slug: '/examples/aggregate-function-combinators/avgMap'
title: 'avgMap'
description: 'Example of using the avgMap combinator'
keywords: ['avg', 'map', 'combinator', 'examples', 'avgMap']
sidebar_label: 'avgMap'
doc_type: 'reference'
---
# avgMap {#avgmap}

## Description {#description}

The `Map` combinator can be applied to the `avg` function to calculate the arithmetic mean of values in a Map according to each key, using the `avgMap` aggregate combinator function.

## Example usage {#example-usage}

In this example, we'll create a table that stores status codes and their counts for different timeslots, where each row contains a Map of status codes to their corresponding counts. We'll use `avgMap` to calculate the average count for each status code within each timeslot.
```sql title="Query"
CREATE TABLE metrics(
date Date,
timeslot DateTime,
status Map(String, UInt64)
) ENGINE = Log;
INSERT INTO metrics VALUES
('2000-01-01', '2000-01-01 00:00:00', (['a', 'b', 'c'], [15, 25, 35])),
('2000-01-01', '2000-01-01 00:00:00', (['c', 'd', 'e'], [45, 55, 65])),
('2000-01-01', '2000-01-01 00:01:00', (['d', 'e', 'f'], [75, 85, 95])),
('2000-01-01', '2000-01-01 00:01:00', (['f', 'g', 'g'], [105, 115, 125]));
SELECT
    timeslot,
    avgMap(status)
FROM metrics
GROUP BY timeslot;
```
The `avgMap` function will calculate the average count for each status code within each timeslot. For example:
- In timeslot '2000-01-01 00:00:00':
  - Status 'a': 15
  - Status 'b': 25
  - Status 'c': (35 + 45) / 2 = 40
  - Status 'd': 55
  - Status 'e': 65
- In timeslot '2000-01-01 00:01:00':
  - Status 'd': 75
  - Status 'e': 85
  - Status 'f': (95 + 105) / 2 = 100
  - Status 'g': (115 + 125) / 2 = 120
```response title="Response"
βββββββββββββtimeslotββ¬βavgMap(status)ββββββββββββββββββββββββ
1. β 2000-01-01 00:01:00 β {'d':75,'e':85,'f':100,'g':120} β
2. β 2000-01-01 00:00:00 β {'a':15,'b':25,'c':40,'d':55,'e':65} β
   βββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββ
```
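As a side note (an addition, not from the original page), the Map values in the `INSERT` above are written as tuples of arrays; they could equally be written with the `map()` function:

```sql
INSERT INTO metrics VALUES
    ('2000-01-01', '2000-01-01 00:00:00', map('a', 15, 'b', 25, 'c', 35));
```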
## See also {#see-also}
- `avg`
- `Map` combinator
---
slug: '/examples/aggregate-function-combinators/groupArrayDistinct'
title: 'groupArrayDistinct'
description: 'Example of using the groupArrayDistinct combinator'
keywords: ['groupArray', 'Distinct', 'combinator', 'examples', 'groupArrayDistinct']
sidebar_label: 'groupArrayDistinct'
doc_type: 'reference'
---
# groupArrayDistinct {#grouparraydistinct}

## Description {#description}

The `Distinct` combinator can be applied to the `groupArray` aggregate function to create an array of distinct argument values, using the `groupArrayDistinct` aggregate combinator function.

## Example usage {#example-usage}

For this example we'll make use of the `hits` dataset available in our SQL playground.

Imagine you want to find out, for each distinct landing page domain (`URLDomain`) on your website, which unique User Agent OS codes (`OS`) were recorded for visitors landing on that domain. This could help you understand the variety of operating systems interacting with different parts of your site.
```sql runnable
SELECT
    URLDomain,
    groupArrayDistinct(OS) AS distinct_os_codes
FROM metrica.hits_v1
WHERE URLDomain != '' -- Consider only hits with a recorded domain
GROUP BY URLDomain
ORDER BY URLDomain ASC
LIMIT 20;
```
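As a side note (an addition, not from the original page), ClickHouse also provides `groupUniqArray`, which returns the same distinct values directly (element order within the array is not guaranteed):

```sql
SELECT
    URLDomain,
    groupUniqArray(OS) AS distinct_os_codes
FROM metrica.hits_v1
WHERE URLDomain != ''
GROUP BY URLDomain
ORDER BY URLDomain ASC
LIMIT 20;
```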
## See also {#see-also}
- `groupArray`
- `Distinct` combinator
---
slug: '/examples/aggregate-function-combinators/sumArray'
title: 'sumArray'
description: 'Example of using the sumArray combinator'
keywords: ['sum', 'array', 'combinator', 'examples', 'sumArray']
sidebar_label: 'sumArray'
doc_type: 'reference'
---
# sumArray {#sumarray}

## Description {#description}

The `Array` combinator can be applied to the `sum` function to calculate the sum of all elements in an array, using the `sumArray` aggregate combinator function.

The `sumArray` function is useful when you need to calculate the total sum of all elements across multiple arrays in a dataset.

## Example usage {#example-usage}

In this example, we'll use a sample dataset of daily sales across different product categories to demonstrate how `sumArray` works. We'll calculate the total sales across all categories for each day.
```sql title="Query"
CREATE TABLE daily_category_sales
(
date Date,
category_sales Array(UInt32)
) ENGINE = Memory;
INSERT INTO daily_category_sales VALUES
('2024-01-01', [100, 200, 150]),
('2024-01-02', [120, 180, 160]),
('2024-01-03', [90, 220, 140]);
SELECT
date,
category_sales,
sumArray(category_sales) AS total_sales_sumArray,
sum(arraySum(category_sales)) AS total_sales_arraySum
FROM daily_category_sales
GROUP BY date, category_sales;
```
The `sumArray` function will sum up all elements in each `category_sales` array. For example, on `2024-01-01`, it sums `100 + 200 + 150 = 450`. This gives the same result as `arraySum`.
## See also {#see-also}
- `sum`
- `arraySum`
- `Array` combinator
- `sumMap`
---
slug: '/examples/aggregate-function-combinators/anyIf'
title: 'anyIf'
description: 'Example of using the anyIf combinator'
keywords: ['any', 'if', 'combinator', 'examples', 'anyIf']
sidebar_label: 'anyIf'
doc_type: 'reference'
---
# anyIf {#anyif}

## Description {#description}

The `If` combinator can be applied to the `any` aggregate function to select the first encountered element from a given column that matches the given condition.

## Example usage {#example-usage}

In this example, we'll create a table that stores sales data with success flags, and we'll use `anyIf` to select the first `transaction_id`s which are above and below an amount of 200.

We first create a table and insert data into it:
```sql title="Query"
CREATE TABLE sales(
transaction_id UInt32,
amount Decimal(10,2),
is_successful UInt8
)
ENGINE = MergeTree()
ORDER BY tuple();
INSERT INTO sales VALUES
(1, 100.00, 1),
(2, 150.00, 1),
(3, 155.00, 0),
(4, 300.00, 1),
(5, 250.50, 0),
(6, 175.25, 1);
```
```sql title="Query"
SELECT
    anyIf(transaction_id, amount < 200) AS tid_lt_200,
    anyIf(transaction_id, amount > 200) AS tid_gt_200
FROM sales;
```
```response title="Response"
ββtid_lt_200ββ¬βtid_gt_200ββ
β 1 β 4 β
ββββββββββββββ΄βββββββββββββ
```
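If you instead want the last matching value, the `If` combinator also composes with `anyLast` (an illustrative addition, not from the original page):

```sql
SELECT
    anyLastIf(transaction_id, amount < 200) AS last_tid_lt_200,
    anyLastIf(transaction_id, amount > 200) AS last_tid_gt_200
FROM sales;
```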
## See also {#see-also}
- `any`
- `If` combinator
---
slug: '/examples/aggregate-function-combinators/maxMap'
title: 'maxMap'
description: 'Example of using the maxMap combinator'
keywords: ['max', 'map', 'combinator', 'examples', 'maxMap']
sidebar_label: 'maxMap'
doc_type: 'reference'
---
# maxMap {#maxmap}

## Description {#description}

The `Map` combinator can be applied to the `max` function to calculate the maximum value in a Map according to each key, using the `maxMap` aggregate combinator function.

## Example usage {#example-usage}

In this example, we'll create a table that stores status codes and their counts for different timeslots, where each row contains a Map of status codes to their corresponding counts. We'll use `maxMap` to find the maximum count for each status code within each timeslot.
```sql title="Query"
CREATE TABLE metrics(
date Date,
timeslot DateTime,
status Map(String, UInt64)
) ENGINE = Log;
INSERT INTO metrics VALUES
('2000-01-01', '2000-01-01 00:00:00', (['a', 'b', 'c'], [15, 25, 35])),
('2000-01-01', '2000-01-01 00:00:00', (['c', 'd', 'e'], [45, 55, 65])),
('2000-01-01', '2000-01-01 00:01:00', (['d', 'e', 'f'], [75, 85, 95])),
('2000-01-01', '2000-01-01 00:01:00', (['f', 'g', 'g'], [105, 115, 125]));
SELECT
    timeslot,
    maxMap(status)
FROM metrics
GROUP BY timeslot;
```
The `maxMap` function will find the maximum count for each status code within each timeslot. For example:
- In timeslot '2000-01-01 00:00:00':
  - Status 'a': 15
  - Status 'b': 25
  - Status 'c': max(35, 45) = 45
  - Status 'd': 55
  - Status 'e': 65
- In timeslot '2000-01-01 00:01:00':
  - Status 'd': 75
  - Status 'e': 85
  - Status 'f': max(95, 105) = 105
  - Status 'g': max(115, 125) = 125
```response title="Response"
βββββββββββββtimeslotββ¬βmaxMap(status)ββββββββββββββββββββββββ
1. β 2000-01-01 00:01:00 β {'d':75,'e':85,'f':105,'g':125} β
2. β 2000-01-01 00:00:00 β {'a':15,'b':25,'c':45,'d':55,'e':65} β
   βββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββ
```
## See also {#see-also}
- `max`
- `Map` combinator
---
slug: '/examples/aggregate-function-combinators/quantilesTimingIf'
title: 'quantilesTimingIf'
description: 'Example of using the quantilesTimingIf combinator'
keywords: ['quantilesTiming', 'if', 'combinator', 'examples', 'quantilesTimingIf']
sidebar_label: 'quantilesTimingIf'
doc_type: 'reference'
---
# quantilesTimingIf {#quantilestimingif}

## Description {#description}

The `If` combinator can be applied to the `quantilesTiming` function to calculate quantiles of timing values for rows where the condition is true, using the `quantilesTimingIf` aggregate combinator function.

## Example usage {#example-usage}

In this example, we'll create a table that stores API response times for different endpoints, and we'll use `quantilesTimingIf` to calculate response time quantiles for successful requests.
```sql title="Query"
CREATE TABLE api_responses(
endpoint String,
response_time_ms UInt32,
is_successful UInt8
) ENGINE = Log;
INSERT INTO api_responses VALUES
('orders', 82, 1),
('orders', 94, 1),
('orders', 98, 1),
('orders', 87, 1),
('orders', 103, 1),
('orders', 92, 1),
('orders', 89, 1),
('orders', 105, 1),
('products', 45, 1),
('products', 52, 1),
('products', 48, 1),
('products', 51, 1),
('products', 49, 1),
('products', 53, 1),
('products', 47, 1),
('products', 50, 1),
('users', 120, 0),
('users', 125, 0),
('users', 118, 0),
('users', 122, 0),
('users', 121, 0),
('users', 119, 0),
('users', 123, 0),
('users', 124, 0);
SELECT
endpoint,
quantilesTimingIf(0, 0.25, 0.5, 0.75, 0.95, 0.99, 1.0)(response_time_ms, is_successful = 1) as response_time_quantiles
FROM api_responses
GROUP BY endpoint;
```
The `quantilesTimingIf` function will calculate quantiles only for successful requests (`is_successful = 1`). The returned array contains the following quantiles in order:
- 0 (minimum)
- 0.25 (first quartile)
- 0.5 (median)
- 0.75 (third quartile)
- 0.95 (95th percentile)
- 0.99 (99th percentile)
- 1.0 (maximum)
```response title="Response"
ββendpointββ¬βresponse_time_quantilesββββββββββββββββββββββββββββββββββββββββββββββ
1. β orders β [82, 87, 92, 98, 103, 104, 105] β
2. β products β [45, 47, 49, 51, 52, 52, 53] β
3. β users β [nan, nan, nan, nan, nan, nan, nan] β
   ββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
```
## See also {#see-also}
- `quantilesTiming`
- `If` combinator
---
slug: '/examples/aggregate-function-combinators/minSimpleState'
title: 'minSimpleState'
description: 'Example of using the minSimpleState combinator'
keywords: ['min', 'state', 'simple', 'combinator', 'examples', 'minSimpleState']
sidebar_label: 'minSimpleState'
doc_type: 'reference'
---
# minSimpleState {#minsimplestate}

## Description {#description}

The `SimpleState` combinator can be applied to the `min` function to return the minimum value across all input values. It returns the result with type `SimpleAggregateFunction`.

## Example usage {#example-usage}

Let's look at a practical example using a table that tracks daily temperature readings. For each location, we want to maintain the lowest temperature recorded. Using the `SimpleAggregateFunction` type with `min` automatically updates the stored value when a lower temperature is encountered.

Create the source table for raw temperature readings:
```sql
CREATE TABLE raw_temperature_readings
(
    location_id UInt32,
    location_name String,
    temperature Int32,
    recorded_at DateTime DEFAULT now()
)
ENGINE = MergeTree()
ORDER BY (location_id, recorded_at);
```
Create the aggregate table that will store the min temperatures:
```sql
CREATE TABLE temperature_extremes
(
    location_id UInt32,
    location_name String,
    min_temp SimpleAggregateFunction(min, Int32), -- Stores minimum temperature
    max_temp SimpleAggregateFunction(max, Int32)  -- Stores maximum temperature
)
ENGINE = AggregatingMergeTree()
ORDER BY location_id;
```
Create an incremental materialized view that acts as an insert trigger for inserted data and maintains the minimum and maximum temperatures per location:
```sql
CREATE MATERIALIZED VIEW temperature_extremes_mv
TO temperature_extremes
AS SELECT
    location_id,
    location_name,
    minSimpleState(temperature) AS min_temp, -- Using SimpleState combinator
    maxSimpleState(temperature) AS max_temp  -- Using SimpleState combinator
FROM raw_temperature_readings
GROUP BY location_id, location_name;
```
Insert some initial temperature readings:
```sql
INSERT INTO raw_temperature_readings (location_id, location_name, temperature) VALUES
(1, 'North', 5),
(2, 'South', 15),
(3, 'West', 10),
(4, 'East', 8);
```
These readings are automatically processed by the materialized view. Let's check the current state:
```sql
SELECT
    location_id,
    location_name,
    min_temp, -- Directly accessing the SimpleAggregateFunction values
    max_temp  -- No need for a finalization function with SimpleAggregateFunction
FROM temperature_extremes
ORDER BY location_id;
```
```response
ββlocation_idββ¬βlocation_nameββ¬βmin_tempββ¬βmax_tempββ
β 1 β North β 5 β 5 β
β 2 β South β 15 β 15 β
β 3 β West β 10 β 10 β
β 4 β East β 8 β 8 β
βββββββββββββββ΄ββββββββββββββββ΄βββββββββββ΄βββββββββββ
```
Insert some more data:
```sql
INSERT INTO raw_temperature_readings (location_id, location_name, temperature) VALUES
(1, 'North', 3),
(2, 'South', 18),
(3, 'West', 10),
(1, 'North', 8),
(4, 'East', 2);
```
View the updated extremes after new data:
```sql
SELECT
    location_id,
    location_name,
    min_temp,
    max_temp
FROM temperature_extremes
ORDER BY location_id;
```
```response
ββlocation_idββ¬βlocation_nameββ¬βmin_tempββ¬βmax_tempββ
β 1 β North β 3 β 8 β
β 1 β North β 5 β 5 β
β 2 β South β 18 β 18 β
β 2 β South β 15 β 15 β
β 3 β West β 10 β 10 β
β 3 β West β 10 β 10 β
β 4 β East β 2 β 2 β
β 4 β East β 8 β 8 β
βββββββββββββββ΄ββββββββββββββββ΄βββββββββββ΄βββββββββββ
```
Notice above that we have two rows for each location. This is because the parts have not yet been merged (and aggregated by `AggregatingMergeTree`). To get the final result from the partial states we need to add a `GROUP BY`:
```sql
SELECT
    location_id,
    location_name,
    min(min_temp) AS min_temp, -- Aggregate across all parts
    max(max_temp) AS max_temp  -- Aggregate across all parts
FROM temperature_extremes
GROUP BY location_id, location_name
ORDER BY location_id;
```
We now get the expected result:
```response
ββlocation_idββ¬βlocation_nameββ¬βmin_tempββ¬βmax_tempββ
β 1 β North β 3 β 8 β
β 2 β South β 15 β 18 β
β 3 β West β 10 β 10 β
β 4 β East β 2 β 8 β
βββββββββββββββ΄ββββββββββββββββ΄βββββββββββ΄βββββββββββ
```
:::note
With `SimpleState`, you do not need to use the `Merge` combinator to combine partial aggregation states.
:::
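For contrast, here is a sketch (assuming the same `raw_temperature_readings` table; an illustrative addition, not from the original page) of the `State`/`Merge` round-trip that `SimpleState` lets you avoid:

```sql
SELECT minMerge(partial_min) AS min_temp
FROM
(
    -- minState produces an opaque AggregateFunction state,
    -- which must be finalized with minMerge
    SELECT minState(temperature) AS partial_min
    FROM raw_temperature_readings
);
```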
## See also {#see-also}
- `min`
- `SimpleState` combinator
- `SimpleAggregateFunction` type
---
slug: '/examples/aggregate-function-combinators/sumForEach'
title: 'sumForEach'
description: 'Example of using the sumForEach aggregate function'
keywords: ['sum', 'ForEach', 'combinator', 'examples', 'sumForEach']
sidebar_label: 'sumForEach'
doc_type: 'reference'
---
# sumForEach {#sumforeach}

## Description {#description}

The `ForEach` combinator can be applied to the `sum` aggregate function to turn it from an aggregate function which operates on row values into an aggregate function which operates on array columns, applying the aggregate to each element in the array across rows.

## Example usage {#example-usage}

For this example we'll make use of the `hits` dataset available in our SQL playground.

The `hits` table contains a column called `IsMobile` of type `UInt8` which can be `0` for Desktop or `1` for mobile:
```sql runnable
SELECT EventTime, IsMobile FROM metrica.hits ORDER BY rand() LIMIT 10;
```
We'll use the `sumForEach` aggregate combinator function to analyze how desktop versus mobile traffic varies by hour of the day. Click the play button below to run the query interactively:
```sql runnable
SELECT
    toHour(EventTime) AS hour_of_day,
    -- Use sumForEach to count desktop and mobile visits in one pass
    sumForEach([
        IsMobile = 0, -- Desktop visits (IsMobile = 0)
        IsMobile = 1  -- Mobile visits (IsMobile = 1)
    ]) AS device_counts
FROM metrica.hits
GROUP BY hour_of_day
ORDER BY hour_of_day;
```
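As a minimal, self-contained illustration of the per-element behavior (an addition, not from the original page), `sumForEach` sums array elements position-wise across rows:

```sql
SELECT sumForEach(arr) AS per_position_sums
FROM values('arr Array(UInt64)', ([1, 2, 3]), ([4, 5, 6]));
-- Expected: [5, 7, 9]
```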
## See also {#see-also}
- `sum`
- `ForEach` combinator