It means that `john` has the permission to execute:

- `SELECT x,y FROM db.table`
- `SELECT x FROM db.table`
- `SELECT y FROM db.table`

`john` can't execute `SELECT z FROM db.table`. The `SELECT * FROM db.table` query is also not available. When processing this query, ClickHouse does not return any data, not even `x` and `y`. The only exception is if a table contains only the `x` and `y` columns; in this case ClickHouse returns all the data.
Also, `john` has the `GRANT OPTION` privilege, so he can grant other users privileges of the same or a smaller scope.
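As a sketch of how this works (the user names are examples, not part of the original grant):

```sql
-- john receives a column-level SELECT privilege and may pass it on
GRANT SELECT(x,y) ON db.table TO john WITH GRANT OPTION

-- john can then grant the same or a narrower privilege to another user
GRANT SELECT(x) ON db.table TO robin
```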
Access to the `system` database is always allowed (since this database is used for processing queries).
:::note
While there are many system tables which new users can access by default, they may not be able to access every system table without explicit grants.
Additionally, access to certain system tables such as `system.zookeeper` is restricted for Cloud users for security reasons.
:::
You can grant multiple privileges to multiple accounts in one query. The query `GRANT SELECT, INSERT ON *.* TO john, robin` allows the accounts `john` and `robin` to execute `INSERT` and `SELECT` queries over all the tables in all the databases on the server.
## Wildcard grants {#wildcard-grants}
When specifying privileges, you can use an asterisk (`*`) instead of a table or a database name. For example, the `GRANT SELECT ON db.* TO john` query allows `john` to execute the `SELECT` query over all the tables in the `db` database.

You can also omit the database name. In this case privileges are granted for the current database.
For example, `GRANT SELECT ON * TO john` grants the privilege on all the tables in the current database, while `GRANT SELECT ON mytable TO john` grants the privilege on the `mytable` table in the current database.
:::note
The feature described below is available starting with ClickHouse version 24.10.
:::
You can also put asterisks at the end of a table or a database name. This feature allows you to grant privileges on an abstract prefix of the table's path.
Example: `GRANT SELECT ON db.my_tables* TO john`. This query allows `john` to execute the `SELECT` query over all the tables in the `db` database whose names start with the prefix `my_tables`.
More examples:

`GRANT SELECT ON db.my_tables* TO john`
```sql
SELECT * FROM db.my_tables -- granted
SELECT * FROM db.my_tables_0 -- granted
SELECT * FROM db.my_tables_1 -- granted
SELECT * FROM db.other_table -- not_granted
SELECT * FROM db2.my_tables -- not_granted
```
`GRANT SELECT ON db*.* TO john`

```sql
SELECT * FROM db.my_tables -- granted
SELECT * FROM db.my_tables_0 -- granted
SELECT * FROM db.my_tables_1 -- granted
SELECT * FROM db.other_table -- granted
SELECT * FROM db2.my_tables -- granted
```
All newly created tables within granted paths will automatically inherit all grants from their parents.
For example, if you run the `GRANT SELECT ON db.* TO john` query and then create a new table `db.new_table`, the user `john` will be able to run the `SELECT * FROM db.new_table` query.
You can specify an asterisk **only** for prefixes:
```sql
GRANT SELECT ON db.* TO john        -- correct
GRANT SELECT ON db*.* TO john       -- correct

GRANT SELECT ON *.my_table TO john  -- wrong
GRANT SELECT ON foo*bar TO john     -- wrong
GRANT SELECT ON *suffix TO john     -- wrong
GRANT SELECT(foo) ON db.table* TO john -- wrong
```
## Privileges {#privileges}

A privilege is a permission given to a user to execute specific kinds of queries.

Privileges have a hierarchical structure, and the set of permitted queries depends on the privilege scope.

The hierarchy of privileges in ClickHouse is shown below:
- `ALL`
    - `ACCESS MANAGEMENT`
        - `ALLOW SQL SECURITY NONE`
        - `ALTER QUOTA`
        - `ALTER ROLE`
        - `ALTER ROW POLICY`
        - `ALTER SETTINGS PROFILE`
        - `ALTER USER`
        - `CREATE QUOTA`
        - `CREATE ROLE`
        - `CREATE ROW POLICY`
        - `CREATE SETTINGS PROFILE`
        - `CREATE USER`
        - `DROP QUOTA`
        - `DROP ROLE`
        - `DROP ROW POLICY`
        - `DROP SETTINGS PROFILE`
        - `DROP USER`
        - `ROLE ADMIN`
        - `SHOW ACCESS`
            - `SHOW QUOTAS`
            - `SHOW ROLES`
            - `SHOW ROW POLICIES`
            - `SHOW SETTINGS PROFILES`
            - `SHOW USERS`
    - `ALTER`
        - `ALTER DATABASE`
            - `ALTER DATABASE SETTINGS`
        - `ALTER TABLE`
            - `ALTER COLUMN`
                - `ALTER ADD COLUMN`
                - `ALTER CLEAR COLUMN`
                - `ALTER COMMENT COLUMN`
                - `ALTER DROP COLUMN`
                - `ALTER MATERIALIZE COLUMN`
                - `ALTER MODIFY COLUMN`
                - `ALTER RENAME COLUMN`
            - `ALTER CONSTRAINT`
                - `ALTER ADD CONSTRAINT`
                - `ALTER DROP CONSTRAINT`
            - `ALTER DELETE`
            - `ALTER FETCH PARTITION`
            - `ALTER FREEZE PARTITION`
            - `ALTER INDEX`
                - `ALTER ADD INDEX`
                - `ALTER CLEAR INDEX`
                - `ALTER DROP INDEX`
                - `ALTER MATERIALIZE INDEX`
                - `ALTER ORDER BY`
                - `ALTER SAMPLE BY`
            - `ALTER MATERIALIZE TTL`
            - `ALTER MODIFY COMMENT`
            - `ALTER MOVE PARTITION`
            - `ALTER PROJECTION`
            - `ALTER SETTINGS`
            - `ALTER STATISTICS`
                - `ALTER ADD STATISTICS`
                - `ALTER DROP STATISTICS`
                - `ALTER MATERIALIZE STATISTICS`
                - `ALTER MODIFY STATISTICS`
            - `ALTER TTL`
            - `ALTER UPDATE`
        - `ALTER VIEW`
            - `ALTER VIEW MODIFY QUERY`
            - `ALTER VIEW REFRESH`
            - `ALTER VIEW MODIFY SQL SECURITY`
    - `BACKUP`
    - `CLUSTER`
    - `CREATE`
        - `CREATE ARBITRARY TEMPORARY TABLE`
            - `CREATE TEMPORARY TABLE`
        - `CREATE DATABASE`
        - `CREATE DICTIONARY`
        - `CREATE FUNCTION`
        - `CREATE RESOURCE`
        - `CREATE TABLE`
        - `CREATE VIEW`
        - `CREATE WORKLOAD`
    - `dictGet`
    - `displaySecretsInShowAndSelect`
    - `DROP`
        - `DROP DATABASE`
        - `DROP DICTIONARY`
        - `DROP FUNCTION`
        - `DROP RESOURCE`
        - `DROP TABLE`
        - `DROP VIEW`
        - `DROP WORKLOAD`
    - `INSERT`
    - `INTROSPECTION`
        - `addressToLine`
        - `addressToLineWithInlines`
        - `addressToSymbol`
        - `demangle`
    - `KILL QUERY`
    - `KILL TRANSACTION`
    - `MOVE PARTITION BETWEEN SHARDS`
    - `NAMED COLLECTION ADMIN`
        - `ALTER NAMED COLLECTION`
        - `CREATE NAMED COLLECTION`
        - `DROP NAMED COLLECTION`
        - `NAMED COLLECTION`
        - `SHOW NAMED COLLECTIONS`
        - `SHOW NAMED COLLECTIONS SECRETS`
    - `OPTIMIZE`
    - `SELECT`
    - `SET DEFINER`
    - `SHOW`
        - `SHOW COLUMNS`
        - `SHOW DATABASES`
        - `SHOW DICTIONARIES`
        - `SHOW TABLES`
    - `SHOW FILESYSTEM CACHES`
    - `SOURCES`
        - `AZURE`
        - `FILE`
        - `HDFS`
        - `HIVE`
        - `JDBC`
        - `KAFKA`
        - `MONGO`
        - `MYSQL`
        - `NATS`
        - `ODBC`
        - `POSTGRES`
        - `RABBITMQ`
        - `REDIS`
        - `REMOTE`
        - `S3`
        - `SQLITE`
        - `URL`
    - `SYSTEM`
        - `SYSTEM CLEANUP`
        - `SYSTEM DROP CACHE`
            - `SYSTEM DROP COMPILED EXPRESSION CACHE`
            - `SYSTEM DROP CONNECTIONS CACHE`
            - `SYSTEM DROP DISTRIBUTED CACHE`
            - `SYSTEM DROP DNS CACHE`
            - `SYSTEM DROP FILESYSTEM CACHE`
            - `SYSTEM DROP FORMAT SCHEMA CACHE`
            - `SYSTEM DROP MARK CACHE`
            - `SYSTEM DROP MMAP CACHE`
            - `SYSTEM DROP PAGE CACHE`
            - `SYSTEM DROP PRIMARY INDEX CACHE`
            - `SYSTEM DROP QUERY CACHE`
            - `SYSTEM DROP S3 CLIENT CACHE`
            - `SYSTEM DROP SCHEMA CACHE`
            - `SYSTEM DROP UNCOMPRESSED CACHE`
        - `SYSTEM DROP REPLICA`
        - `SYSTEM FAILPOINT`
        - `SYSTEM FETCHES`
        - `SYSTEM FLUSH`
            - `SYSTEM FLUSH ASYNC INSERT QUEUE`
            - `SYSTEM FLUSH LOGS`
        - `SYSTEM JEMALLOC`
        - `SYSTEM KILL QUERY`
        - `SYSTEM KILL TRANSACTION`
        - `SYSTEM LISTEN`
        - `SYSTEM LOAD PRIMARY KEY`
        - `SYSTEM MERGES`
        - `SYSTEM MOVES`
        - `SYSTEM PULLING REPLICATION LOG`
        - `SYSTEM REDUCE BLOCKING PARTS`
        - `SYSTEM REPLICATION QUEUES`
        - `SYSTEM REPLICA READINESS`
        - `SYSTEM RESTART DISK`
        - `SYSTEM RESTART REPLICA`
        - `SYSTEM RESTORE REPLICA`
        - `SYSTEM RELOAD`
            - `SYSTEM RELOAD ASYNCHRONOUS METRICS`
            - `SYSTEM RELOAD CONFIG`
            - `SYSTEM RELOAD DICTIONARY`
            - `SYSTEM RELOAD EMBEDDED DICTIONARIES`
            - `SYSTEM RELOAD FUNCTION`
            - `SYSTEM RELOAD MODEL`
            - `SYSTEM RELOAD USERS`
        - `SYSTEM SENDS`
            - `SYSTEM DISTRIBUTED SENDS`
            - `SYSTEM REPLICATED SENDS`
        - `SYSTEM SHUTDOWN`
        - `SYSTEM SYNC DATABASE REPLICA`
        - `SYSTEM SYNC FILE CACHE`
        - `SYSTEM SYNC FILESYSTEM CACHE`
        - `SYSTEM SYNC REPLICA`
        - `SYSTEM SYNC TRANSACTION LOG`
        - `SYSTEM THREAD FUZZER`
        - `SYSTEM TTL MERGES`
        - `SYSTEM UNFREEZE`
        - `SYSTEM UNLOAD PRIMARY KEY`
        - `SYSTEM VIEWS`
        - `SYSTEM VIRTUAL PARTS UPDATE`
        - `SYSTEM WAIT LOADING PARTS`
    - `TABLE ENGINE`
    - `TRUNCATE`
    - `UNDROP TABLE`
- `NONE`
Examples of how this hierarchy is treated:

- The `ALTER` privilege includes all other `ALTER*` privileges.
- `ALTER CONSTRAINT` includes `ALTER ADD CONSTRAINT` and `ALTER DROP CONSTRAINT` privileges.
Privileges are applied at different levels. Knowing the level of a privilege suggests which syntax is available for it.

Levels (from lower to higher):

- `COLUMN` - Privilege can be granted for a column, table, database, or globally.
- `TABLE` - Privilege can be granted for a table, database, or globally.
- `VIEW` - Privilege can be granted for a view, database, or globally.
- `DICTIONARY` - Privilege can be granted for a dictionary, database, or globally.
- `DATABASE` - Privilege can be granted for a database or globally.
- `GLOBAL` - Privilege can be granted only globally.
- `GROUP` - Groups privileges of different levels. When a `GROUP`-level privilege is granted, only those privileges from the group that correspond to the used syntax are granted.
Examples of allowed syntax:
- `GRANT SELECT(x) ON db.table TO user`
- `GRANT SELECT ON db.* TO user`

Examples of disallowed syntax:

- `GRANT CREATE USER(x) ON db.table TO user`
- `GRANT CREATE USER ON db.* TO user`
The special privilege `ALL` grants all the privileges to a user account or a role.

By default, a user account or a role has no privileges.

If a user or a role has no privileges, it is displayed as the `NONE` privilege.
Some queries by their implementation require a set of privileges. For example, to execute the `RENAME` query you need the following privileges: `SELECT`, `CREATE TABLE`, `INSERT` and `DROP TABLE`.
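A sketch of what those prerequisite grants could look like for a hypothetical user and database (names are examples):

```sql
-- RENAME needs SELECT, CREATE TABLE, INSERT, and DROP TABLE
-- on the affected tables; db and john are placeholder names
GRANT SELECT, CREATE TABLE, INSERT, DROP TABLE ON db.* TO john
```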
### SELECT {#select}

Allows executing `SELECT` queries.

Privilege level: `COLUMN`.
**Description**

A user granted this privilege can execute `SELECT` queries over a specified list of columns in the specified table and database. If the user includes other columns than specified, the query returns no data.
Consider the following privilege:

```sql
GRANT SELECT(x,y) ON db.table TO john
```
This privilege allows `john` to execute any `SELECT` query that involves data from the `x` and/or `y` columns in `db.table`, for example, `SELECT x FROM db.table`. `john` can't execute `SELECT z FROM db.table`. The `SELECT * FROM db.table` query is also not available. When processing this query, ClickHouse does not return any data, not even `x` and `y`. The only exception is if a table contains only the `x` and `y` columns; in this case ClickHouse returns all the data.
### INSERT {#insert}

Allows executing `INSERT` queries.

Privilege level: `COLUMN`.
**Description**

A user granted this privilege can execute `INSERT` queries over a specified list of columns in the specified table and database. If the user includes other columns than specified, the query does not insert any data.
**Example**

```sql
GRANT INSERT(x,y) ON db.table TO john
```
The granted privilege allows `john` to insert data into the `x` and/or `y` columns in `db.table`.
### ALTER {#alter}

Allows executing `ALTER` queries according to the following hierarchy of privileges:
- `ALTER`. Level: `COLUMN`.
    - `ALTER TABLE`. Level: `GROUP`
        - `ALTER UPDATE`. Level: `COLUMN`. Aliases: `UPDATE`
        - `ALTER DELETE`. Level: `COLUMN`. Aliases: `DELETE`
        - `ALTER COLUMN`. Level: `GROUP`
            - `ALTER ADD COLUMN`. Level: `COLUMN`. Aliases: `ADD COLUMN`
            - `ALTER DROP COLUMN`. Level: `COLUMN`. Aliases: `DROP COLUMN`
            - `ALTER MODIFY COLUMN`. Level: `COLUMN`. Aliases: `MODIFY COLUMN`
            - `ALTER COMMENT COLUMN`. Level: `COLUMN`. Aliases: `COMMENT COLUMN`
            - `ALTER CLEAR COLUMN`. Level: `COLUMN`. Aliases: `CLEAR COLUMN`
            - `ALTER RENAME COLUMN`. Level: `COLUMN`. Aliases: `RENAME COLUMN`
        - `ALTER INDEX`. Level: `GROUP`. Aliases: `INDEX`
            - `ALTER ORDER BY`. Level: `TABLE`. Aliases: `ALTER MODIFY ORDER BY`, `MODIFY ORDER BY`
            - `ALTER SAMPLE BY`. Level: `TABLE`. Aliases: `ALTER MODIFY SAMPLE BY`, `MODIFY SAMPLE BY`
            - `ALTER ADD INDEX`. Level: `TABLE`. Aliases: `ADD INDEX`
            - `ALTER DROP INDEX`. Level: `TABLE`. Aliases: `DROP INDEX`
            - `ALTER MATERIALIZE INDEX`. Level: `TABLE`. Aliases: `MATERIALIZE INDEX`
            - `ALTER CLEAR INDEX`. Level: `TABLE`. Aliases: `CLEAR INDEX`
        - `ALTER CONSTRAINT`. Level: `GROUP`. Aliases: `CONSTRAINT`
            - `ALTER ADD CONSTRAINT`. Level: `TABLE`. Aliases: `ADD CONSTRAINT`
            - `ALTER DROP CONSTRAINT`. Level: `TABLE`. Aliases: `DROP CONSTRAINT`
        - `ALTER TTL`. Level: `TABLE`. Aliases: `ALTER MODIFY TTL`, `MODIFY TTL`
            - `ALTER MATERIALIZE TTL`. Level: `TABLE`. Aliases: `MATERIALIZE TTL`
        - `ALTER SETTINGS`. Level: `TABLE`. Aliases: `ALTER SETTING`, `ALTER MODIFY SETTING`, `MODIFY SETTING`
        - `ALTER MOVE PARTITION`. Level: `TABLE`. Aliases: `ALTER MOVE PART`, `MOVE PARTITION`, `MOVE PART`
        - `ALTER FETCH PARTITION`. Level: `TABLE`. Aliases: `ALTER FETCH PART`, `FETCH PARTITION`, `FETCH PART`
        - `ALTER FREEZE PARTITION`. Level: `TABLE`. Aliases: `FREEZE PARTITION`
    - `ALTER VIEW`. Level: `GROUP`
        - `ALTER VIEW REFRESH`. Level: `VIEW`. Aliases: `REFRESH VIEW`
        - `ALTER VIEW MODIFY QUERY`. Level: `VIEW`. Aliases: `ALTER TABLE MODIFY QUERY`
        - `ALTER VIEW MODIFY SQL SECURITY`. Level: `VIEW`. Aliases: `ALTER TABLE MODIFY SQL SECURITY`
Examples of how this hierarchy is treated:

- The `ALTER` privilege includes all other `ALTER*` privileges.
- `ALTER CONSTRAINT` includes `ALTER ADD CONSTRAINT` and `ALTER DROP CONSTRAINT` privileges.
**Notes**

- The `MODIFY SETTING` privilege allows modifying table engine settings. It does not affect settings or server configuration parameters.
- The `ATTACH` operation needs the `CREATE` privilege.
- The `DETACH` operation needs the `DROP` privilege.
- To stop a mutation with the `KILL MUTATION` query, you need to have the privilege to start this mutation. For example, if you want to stop an `ALTER UPDATE` query, you need the `ALTER UPDATE`, `ALTER TABLE`, or `ALTER` privilege.
### BACKUP {#backup}

Allows executing `BACKUP` queries. For more information on backups, see "Backup and Restore".
### CREATE {#create}

Allows executing `CREATE` and `ATTACH` DDL queries according to the following hierarchy of privileges:

- `CREATE`. Level: `GROUP`
    - `CREATE DATABASE`. Level: `DATABASE`
    - `CREATE TABLE`. Level: `TABLE`
        - `CREATE ARBITRARY TEMPORARY TABLE`. Level: `GLOBAL`
            - `CREATE TEMPORARY TABLE`. Level: `GLOBAL`
    - `CREATE VIEW`. Level: `VIEW`
    - `CREATE DICTIONARY`. Level: `DICTIONARY`
**Notes**

- To delete the created table, a user needs the `DROP` privilege.
### CLUSTER {#cluster}

Allows executing `ON CLUSTER` queries.

```sql title="Syntax"
GRANT CLUSTER ON *.* TO <username>
```

By default, queries with `ON CLUSTER` require the user to have the `CLUSTER` grant.
You will get the following error if you try to use `ON CLUSTER` in a query without first granting the `CLUSTER` privilege:

```text
Not enough privileges. To execute this query, it's necessary to have the grant CLUSTER ON *.*.
```
The default behavior can be changed by setting the `on_cluster_queries_require_cluster_grant` setting, located in the `access_control_improvements` section of `config.xml` (see below), to `false`.

```yaml title="config.xml"
<access_control_improvements>
    <on_cluster_queries_require_cluster_grant>true</on_cluster_queries_require_cluster_grant>
</access_control_improvements>
```
### DROP {#drop}

Allows executing `DROP` and `DETACH` queries according to the following hierarchy of privileges:

- `DROP`. Level: `GROUP`
    - `DROP DATABASE`. Level: `DATABASE`
    - `DROP TABLE`. Level: `TABLE`
    - `DROP VIEW`. Level: `VIEW`
    - `DROP DICTIONARY`. Level: `DICTIONARY`
### TRUNCATE {#truncate}

Allows executing `TRUNCATE` queries.

Privilege level: `TABLE`.
### OPTIMIZE {#optimize}

Allows executing `OPTIMIZE TABLE` queries.

Privilege level: `TABLE`.
### SHOW {#show}

Allows executing `SHOW`, `DESCRIBE`, `USE`, and `EXISTS` queries according to the following hierarchy of privileges:

- `SHOW`. Level: `GROUP`
    - `SHOW DATABASES`. Level: `DATABASE`. Allows executing `SHOW DATABASES`, `SHOW CREATE DATABASE`, `USE <database>` queries.
    - `SHOW TABLES`. Level: `TABLE`. Allows executing `SHOW TABLES`, `EXISTS <table>`, `CHECK <table>` queries.
    - `SHOW COLUMNS`. Level: `COLUMN`. Allows executing `SHOW CREATE TABLE`, `DESCRIBE` queries.
    - `SHOW DICTIONARIES`. Level: `DICTIONARY`. Allows executing `SHOW DICTIONARIES`, `SHOW CREATE DICTIONARY`, `EXISTS <dictionary>` queries.
**Notes**

- A user has the `SHOW` privilege if it has any other privilege concerning the specified table, dictionary or database.
### KILL QUERY {#kill-query}

Allows executing `KILL` queries.

Privilege level: `GLOBAL`.
**Notes**

- The `KILL QUERY` privilege allows one user to kill queries of other users.
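As a sketch, granting it could look like this (the user name is an example):

```sql
-- KILL QUERY is a global-level privilege
GRANT KILL QUERY ON *.* TO john
```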
### ACCESS MANAGEMENT {#access-management}

Allows a user to execute queries that manage users, roles and row policies.
- `ACCESS MANAGEMENT`. Level: `GROUP`
    - `CREATE USER`. Level: `GLOBAL`
    - `ALTER USER`. Level: `GLOBAL`
    - `DROP USER`. Level: `GLOBAL`
    - `CREATE ROLE`. Level: `GLOBAL`
    - `ALTER ROLE`. Level: `GLOBAL`
    - `DROP ROLE`. Level: `GLOBAL`
    - `ROLE ADMIN`. Level: `GLOBAL`
    - `CREATE ROW POLICY`. Level: `GLOBAL`. Aliases: `CREATE POLICY`
    - `ALTER ROW POLICY`. Level: `GLOBAL`. Aliases: `ALTER POLICY`
    - `DROP ROW POLICY`. Level: `GLOBAL`. Aliases: `DROP POLICY`
    - `CREATE QUOTA`. Level: `GLOBAL`
    - `ALTER QUOTA`. Level: `GLOBAL`
    - `DROP QUOTA`. Level: `GLOBAL`
    - `CREATE SETTINGS PROFILE`. Level: `GLOBAL`. Aliases: `CREATE PROFILE`
    - `ALTER SETTINGS PROFILE`. Level: `GLOBAL`. Aliases: `ALTER PROFILE`
    - `DROP SETTINGS PROFILE`. Level: `GLOBAL`. Aliases: `DROP PROFILE`
    - `SHOW ACCESS`. Level: `GROUP`
        - `SHOW_USERS`. Level: `GLOBAL`. Aliases: `SHOW CREATE USER`
        - `SHOW_ROLES`. Level: `GLOBAL`. Aliases: `SHOW CREATE ROLE`
        - `SHOW_ROW_POLICIES`. Level: `GLOBAL`. Aliases: `SHOW POLICIES`, `SHOW CREATE ROW POLICY`, `SHOW CREATE POLICY`
        - `SHOW_QUOTAS`. Level: `GLOBAL`. Aliases: `SHOW CREATE QUOTA`
        - `SHOW_SETTINGS_PROFILES`. Level: `GLOBAL`. Aliases: `SHOW PROFILES`, `SHOW CREATE SETTINGS PROFILE`, `SHOW CREATE PROFILE`
    - `ALLOW SQL SECURITY NONE`. Level: `GLOBAL`. Aliases: `CREATE SQL SECURITY NONE`, `SQL SECURITY NONE`, `SECURITY NONE`
The `ROLE ADMIN` privilege allows a user to assign and revoke any roles, including those which are not assigned to the user with the admin option.
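Granting it could look like this (the user name is an example):

```sql
-- ROLE ADMIN is a global-level privilege
GRANT ROLE ADMIN ON *.* TO john
```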
### SYSTEM {#system}

Allows a user to execute `SYSTEM` queries according to the following hierarchy of privileges:
- `SYSTEM`. Level: `GROUP`
    - `SYSTEM SHUTDOWN`. Level: `GLOBAL`. Aliases: `SYSTEM KILL`, `SHUTDOWN`
    - `SYSTEM DROP CACHE`. Aliases: `DROP CACHE`
        - `SYSTEM DROP DNS CACHE`. Level: `GLOBAL`. Aliases: `SYSTEM DROP DNS`, `DROP DNS CACHE`, `DROP DNS`
        - `SYSTEM DROP MARK CACHE`. Level: `GLOBAL`. Aliases: `SYSTEM DROP MARK`, `DROP MARK CACHE`, `DROP MARKS`
        - `SYSTEM DROP UNCOMPRESSED CACHE`. Level: `GLOBAL`. Aliases: `SYSTEM DROP UNCOMPRESSED`, `DROP UNCOMPRESSED CACHE`, `DROP UNCOMPRESSED`
    - `SYSTEM RELOAD`. Level: `GROUP`
        - `SYSTEM RELOAD CONFIG`. Level: `GLOBAL`. Aliases: `RELOAD CONFIG`
        - `SYSTEM RELOAD DICTIONARY`. Level: `GLOBAL`. Aliases: `SYSTEM RELOAD DICTIONARIES`, `RELOAD DICTIONARY`, `RELOAD DICTIONARIES`
        - `SYSTEM RELOAD EMBEDDED DICTIONARIES`. Level: `GLOBAL`. Aliases: `RELOAD EMBEDDED DICTIONARIES`
    - `SYSTEM MERGES`. Level: `TABLE`. Aliases: `SYSTEM STOP MERGES`, `SYSTEM START MERGES`, `STOP MERGES`, `START MERGES`
    - `SYSTEM TTL MERGES`. Level: `TABLE`. Aliases: `SYSTEM STOP TTL MERGES`, `SYSTEM START TTL MERGES`, `STOP TTL MERGES`, `START TTL MERGES`
    - `SYSTEM FETCHES`. Level: `TABLE`. Aliases: `SYSTEM STOP FETCHES`, `SYSTEM START FETCHES`, `STOP FETCHES`, `START FETCHES`
    - `SYSTEM MOVES`. Level: `TABLE`. Aliases: `SYSTEM STOP MOVES`, `SYSTEM START MOVES`, `STOP MOVES`, `START MOVES`
    - `SYSTEM SENDS`. Level: `GROUP`. Aliases: `SYSTEM STOP SENDS`, `SYSTEM START SENDS`, `STOP SENDS`, `START SENDS`
        - `SYSTEM DISTRIBUTED SENDS`. Level: `TABLE`. Aliases: `SYSTEM STOP DISTRIBUTED SENDS`, `SYSTEM START DISTRIBUTED SENDS`, `STOP DISTRIBUTED SENDS`, `START DISTRIBUTED SENDS`
        - `SYSTEM REPLICATED SENDS`. Level: `TABLE`. Aliases: `SYSTEM STOP REPLICATED SENDS`, `SYSTEM START REPLICATED SENDS`, `STOP REPLICATED SENDS`, `START REPLICATED SENDS`
    - `SYSTEM REPLICATION QUEUES`. Level: `TABLE`. Aliases: `SYSTEM STOP REPLICATION QUEUES`, `SYSTEM START REPLICATION QUEUES`, `STOP REPLICATION QUEUES`, `START REPLICATION QUEUES`
    - `SYSTEM SYNC REPLICA`. Level: `TABLE`. Aliases: `SYNC REPLICA`
    - `SYSTEM RESTART REPLICA`. Level: `TABLE`. Aliases: `RESTART REPLICA`
    - `SYSTEM FLUSH`. Level: `GROUP`
        - `SYSTEM FLUSH DISTRIBUTED`. Level: `TABLE`. Aliases: `FLUSH DISTRIBUTED`
        - `SYSTEM FLUSH LOGS`. Level: `GLOBAL`. Aliases: `FLUSH LOGS`
The `SYSTEM RELOAD EMBEDDED DICTIONARIES` privilege is implicitly granted by the `SYSTEM RELOAD DICTIONARY ON *.*` privilege.
### INTROSPECTION {#introspection}

Allows using introspection functions.

- `INTROSPECTION`. Level: `GROUP`. Aliases: `INTROSPECTION FUNCTIONS`
    - `addressToLine`. Level: `GLOBAL`
    - `addressToLineWithInlines`. Level: `GLOBAL`
    - `addressToSymbol`. Level: `GLOBAL`
    - `demangle`. Level: `GLOBAL`
### SOURCES {#sources}

Allows using external data sources. Applies to table engines and table functions.

- `READ`. Level: `GLOBAL_WITH_PARAMETER`
- `WRITE`. Level: `GLOBAL_WITH_PARAMETER`

Possible parameters:

- `AZURE`
- `FILE`
- `HDFS`
- `HIVE`
- `JDBC`
- `KAFKA`
- `MONGO`
- `MYSQL`
- `NATS`
- `ODBC`
- `POSTGRES`
- `RABBITMQ`
- `REDIS`
- `REMOTE`
- `S3`
- `SQLITE`
- `URL`
:::note
The separation into READ/WRITE grants for sources is available starting with version 25.7 and only with the server setting `access_control_improvements.enable_read_write_grants` enabled.
Otherwise, you should use the syntax `GRANT AZURE ON *.* TO user`, which is equivalent to the new `GRANT READ, WRITE ON AZURE TO user`.
:::
Examples:

- To create a table with the MySQL table engine, you need `CREATE TABLE (ON db.table_name)` and `MYSQL` privileges.
- To use the `mysql` table function, you need `CREATE TEMPORARY TABLE` and `MYSQL` privileges.
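A sketch of those two grant sets (user, database, and table names are placeholders):

```sql
-- Create a table backed by the MySQL engine
GRANT CREATE TABLE ON db.table_name TO john
GRANT MYSQL ON *.* TO john

-- Use the mysql() table function
GRANT CREATE TEMPORARY TABLE ON *.* TO john
```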
#### Source Filter Grants {#source-filter-grants}
:::note
This feature is available starting with version 25.8 and only with the server setting `access_control_improvements.enable_read_write_grants` enabled.
:::
You can grant access to specific source URIs by using regular expression filters. This allows fine-grained control over which external data sources users can access.
Syntax:

```sql
GRANT READ ON S3('regexp_pattern') TO user
```

This grant allows the user to read only from S3 URIs that match the specified regular expression pattern.
Examples:

Grant access to specific S3 bucket paths:

```sql
-- Allow user to read only from s3://foo/ paths
GRANT READ ON S3('s3://foo/.*') TO john

-- Allow user to read from specific file patterns
GRANT READ ON S3('s3://mybucket/data/2024/.*.parquet') TO analyst

-- Multiple filters can be granted to the same user
GRANT READ ON S3('s3://foo/.*') TO john
GRANT READ ON S3('s3://bar/.*') TO john
```
:::warning
The source filter takes a regexp as a parameter, so a grant `GRANT READ ON URL('http://www.google.com') TO john;` will allow the queries

```sql
SELECT * FROM url('https://www.google.com');
SELECT * FROM url('https://www-google.com');
```

because `.` is treated as "any single character" in regexps.
This may lead to a potential vulnerability. The correct grant would be

```sql
GRANT READ ON URL('https://www\.google\.com') TO john;
```
:::
**Re-granting with GRANT OPTION:**

If the original grant has `WITH GRANT OPTION`, it can be re-granted using `GRANT CURRENT GRANTS`:

```sql
-- Original grant with GRANT OPTION
GRANT READ ON S3('s3://foo/.*') TO john WITH GRANT OPTION

-- John can now regrant this access to others
GRANT CURRENT GRANTS(READ ON S3) TO alice
```
**Important limitations:**

- **Partial revokes are not allowed:** You cannot revoke a subset of a granted filter pattern. You must revoke the entire grant and re-grant with new patterns if needed.
- **Wildcard grants are not allowed:** You cannot use `GRANT READ ON *('regexp')` or similar wildcard-only patterns. A specific source must be provided.
### dictGet {#dictget}

`dictGet`. Aliases: `dictHas`, `dictGetHierarchy`, `dictIsIn`

Allows a user to execute the `dictGet`, `dictHas`, `dictGetHierarchy`, `dictIsIn` functions.

Privilege level: `DICTIONARY`.
**Examples**

- `GRANT dictGet ON mydb.mydictionary TO john`
- `GRANT dictGet ON mydictionary TO john`
### displaySecretsInShowAndSelect {#displaysecretsinshowandselect}

Allows a user to view secrets in `SHOW` and `SELECT` queries if both the `display_secrets_in_show_and_select` server setting and the `format_display_secrets_in_show_and_select` format setting are turned on.
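As a sketch, the grant itself could look like this (the user name is an example; it only takes effect with both settings above enabled):

```sql
GRANT displaySecretsInShowAndSelect ON *.* TO john
```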
### NAMED COLLECTION ADMIN {#named-collection-admin}

Allows a certain operation on a specified named collection. Before version 23.7 it was called NAMED COLLECTION CONTROL; after 23.7, NAMED COLLECTION ADMIN was added and NAMED COLLECTION CONTROL is preserved as an alias.
- `NAMED COLLECTION ADMIN`. Level: `NAMED_COLLECTION`. Aliases: `NAMED COLLECTION CONTROL`
    - `CREATE NAMED COLLECTION`. Level: `NAMED_COLLECTION`
    - `DROP NAMED COLLECTION`. Level: `NAMED_COLLECTION`
    - `ALTER NAMED COLLECTION`. Level: `NAMED_COLLECTION`
    - `SHOW NAMED COLLECTIONS`. Level: `NAMED_COLLECTION`. Aliases: `SHOW NAMED COLLECTIONS`
    - `SHOW NAMED COLLECTIONS SECRETS`. Level: `NAMED_COLLECTION`. Aliases: `SHOW NAMED COLLECTIONS SECRETS`
    - `NAMED COLLECTION`. Level: `NAMED_COLLECTION`. Aliases: `NAMED COLLECTION USAGE`, `USE NAMED COLLECTION`
Unlike all other grants (CREATE, DROP, ALTER, SHOW), the NAMED COLLECTION grant was added only in 23.7, while all the others were added earlier, in 22.12.
**Examples**

Assuming a named collection is called `abc`, we grant the privilege CREATE NAMED COLLECTION to the user `john`:

- `GRANT CREATE NAMED COLLECTION ON abc TO john`
### TABLE ENGINE {#table-engine}

Allows using a specified table engine when creating a table. Applies to table engines.

**Examples**

- `GRANT TABLE ENGINE ON * TO john`
- `GRANT TABLE ENGINE ON TinyLog TO john`

### ALL {#all}

Grants all the privileges on the regulated entity to a user account or a role.
:::note
The privilege `ALL` is not supported in ClickHouse Cloud, where the `default` user has limited permissions. Users can grant the maximum permissions to a user by granting the `default_role`. See here for further details.
Users can also use `GRANT CURRENT GRANTS` as the default user to achieve similar effects to `ALL`.
:::
### NONE {#none}

Doesn't grant any privileges.
### ADMIN OPTION {#admin-option}

The `ADMIN OPTION` privilege allows a user to grant their role to another user.
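As a sketch (the role and user names are examples):

```sql
-- john receives the accountant role and may grant it to other users
GRANT accountant TO john WITH ADMIN OPTION
```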
---
description: 'Documentation for Kill'
sidebar_label: 'KILL'
sidebar_position: 46
slug: /sql-reference/statements/kill
title: 'KILL Statements'
doc_type: 'reference'
---
There are two kinds of kill statements: to kill a query and to kill a mutation.
## KILL QUERY {#kill-query}

```sql
KILL QUERY [ON CLUSTER cluster]
  WHERE <where expression to SELECT FROM system.processes query>
  [SYNC|ASYNC|TEST]
  [FORMAT format]
```
Attempts to forcibly terminate the currently running queries.
The queries to terminate are selected from the `system.processes` table using the criteria defined in the `WHERE` clause of the `KILL` query.
Examples:

First, you'll need to get the list of incomplete queries. This SQL query provides them according to those running the longest:

List from a single ClickHouse node:

```sql
SELECT
  initial_query_id,
  query_id,
  formatReadableTimeDelta(elapsed) AS time_delta,
  query,
  *
FROM system.processes
WHERE query ILIKE 'SELECT%'
ORDER BY time_delta DESC;
```
List from a ClickHouse cluster:

```sql
SELECT
  initial_query_id,
  query_id,
  formatReadableTimeDelta(elapsed) AS time_delta,
  query,
  *
FROM clusterAllReplicas(default, system.processes)
WHERE query ILIKE 'SELECT%'
ORDER BY time_delta DESC;
```
Kill the query:
```sql
-- Forcibly terminates all queries with the specified query_id:
KILL QUERY WHERE query_id='2-857d-4a57-9ee0-327da5d60a90'
-- Synchronously terminates all queries run by 'username':
KILL QUERY WHERE user='username' SYNC
```
:::tip
If you are killing a query in ClickHouse Cloud or in a self-managed cluster, be sure to use the `ON CLUSTER [cluster-name]` option to ensure the query is killed on all replicas.
:::
Read-only users can only stop their own queries.

By default, the asynchronous version of queries is used (`ASYNC`), which does not wait for confirmation that queries have stopped.

The synchronous version (`SYNC`) waits for all queries to stop and displays information about each process as it stops.

The response contains the `kill_status` column, which can take the following values:

- `finished` - The query was terminated successfully.
- `waiting` - Waiting for the query to end after sending it a signal to terminate.

The other values explain why the query can't be stopped.
A test query (`TEST`) only checks the user's rights and displays a list of queries to stop.
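As a sketch of such a dry run (the user name is an example):

```sql
-- Checks rights and lists the matching queries without killing them
KILL QUERY WHERE user = 'username' TEST
```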
## KILL MUTATION {#kill-mutation}

The presence of long-running or incomplete mutations often indicates that a ClickHouse service is running poorly. The asynchronous nature of mutations can cause them to consume all available resources on a system. You may need to either:

- Pause all new mutations, `INSERT`s, and `SELECT`s and allow the queue of mutations to complete.
- Or manually kill some of these mutations by sending a `KILL` command.
```sql
KILL MUTATION
  WHERE <where expression to SELECT FROM system.mutations query>
  [TEST]
  [FORMAT format]
```
Tries to cancel and remove mutations that are currently executing. Mutations to cancel are selected from the `system.mutations` table using the filter specified by the `WHERE` clause of the `KILL` query.
A test query (`TEST`) only checks the user's rights and displays a list of mutations to stop.
Examples:
Get a `count()` of the number of incomplete mutations:
Count of mutations from a single ClickHouse node:

```sql
SELECT count(*)
FROM system.mutations
WHERE is_done = 0;
```

Count of mutations from a ClickHouse cluster of replicas:

```sql
SELECT count(*)
FROM clusterAllReplicas('default', system.mutations)
WHERE is_done = 0;
```

Query the list of incomplete mutations:
List of mutations from a single ClickHouse node:

```sql
SELECT mutation_id, *
FROM system.mutations
WHERE is_done = 0;
```

List of mutations from a ClickHouse cluster:

```sql
SELECT mutation_id, *
FROM clusterAllReplicas('default', system.mutations)
WHERE is_done = 0;
```

Kill the mutations as needed:
```sql
-- Cancel and remove all mutations of the single table:
KILL MUTATION WHERE database = 'default' AND table = 'table'
-- Cancel the specific mutation:
KILL MUTATION WHERE database = 'default' AND table = 'table' AND mutation_id = 'mutation_3.txt'
```
The query is useful when a mutation is stuck and cannot finish (e.g. if some function in the mutation query throws an exception when applied to the data contained in the table).
Changes already made by the mutation are not rolled back.
:::note
The `is_killed=1` column (ClickHouse Cloud only) in the `system.mutations` table does not necessarily mean the mutation is completely finalized. It is possible for a mutation to remain in a state where `is_killed=1` and `is_done=0` for an extended period. This can happen if another long-running mutation is blocking the killed mutation. This is a normal situation.
:::
---
description: 'Documentation for Explain'
sidebar_label: 'EXPLAIN'
sidebar_position: 39
slug: /sql-reference/statements/explain
title: 'EXPLAIN Statement'
doc_type: 'reference'
---
Shows the execution plan of a statement.
Syntax:

```sql
EXPLAIN [AST | SYNTAX | QUERY TREE | PLAN | PIPELINE | ESTIMATE | TABLE OVERRIDE] [setting = value, ...]
[
SELECT ... |
tableFunction(...) [COLUMNS (...)] [ORDER BY ...] [PARTITION BY ...] [PRIMARY KEY] [SAMPLE BY ...] [TTL ...]
]
[FORMAT ...]
```

Example:

```sql
EXPLAIN SELECT sum(number) FROM numbers(10) UNION ALL SELECT sum(number) FROM numbers(10) ORDER BY sum(number) ASC FORMAT TSV;
```

```sql
Union
  Expression (Projection)
    Expression (Before ORDER BY and SELECT)
      Aggregating
        Expression (Before GROUP BY)
          SettingQuotaAndLimits (Set limits and quota after reading from storage)
            ReadFromStorage (SystemNumbers)
  Expression (Projection)
    MergingSorted (Merge sorted streams for ORDER BY)
      MergeSorting (Merge sorted blocks for ORDER BY)
        PartialSorting (Sort each block for ORDER BY)
          Expression (Before ORDER BY and SELECT)
            Aggregating
              Expression (Before GROUP BY)
                SettingQuotaAndLimits (Set limits and quota after reading from storage)
                  ReadFromStorage (SystemNumbers)
```
EXPLAIN Types {#explain-types}
- `AST` — Abstract syntax tree.
- `SYNTAX` — Query text after AST-level optimizations.
- `QUERY TREE` — Query tree after Query Tree level optimizations.
- `PLAN` — Query execution plan.
- `PIPELINE` — Query execution pipeline.
EXPLAIN AST {#explain-ast}
Dump query AST. Supports all types of queries, not only `SELECT`.
Examples:

```sql
EXPLAIN AST SELECT 1;
```

```sql
SelectWithUnionQuery (children 1)
  ExpressionList (children 1)
    SelectQuery (children 1)
      ExpressionList (children 1)
        Literal UInt64_1
```

```sql
EXPLAIN AST ALTER TABLE t1 DELETE WHERE date = today();
```

```sql
explain
AlterQuery t1 (children 1)
  ExpressionList (children 1)
    AlterCommand 27 (children 1)
      Function equals (children 1)
        ExpressionList (children 2)
          Identifier date
          Function today (children 1)
            ExpressionList
```
EXPLAIN SYNTAX {#explain-syntax}
Shows the Abstract Syntax Tree (AST) of a query after syntax analysis.
It's done by parsing the query, constructing query AST and query tree, optionally running query analyzer and optimization passes, and then converting the query tree back to the query AST.
Settings:
- `oneline` — Print the query in one line. Default: `0`.
- `run_query_tree_passes` — Run query tree passes before dumping the query tree. Default: `0`.
- `query_tree_passes` — If `run_query_tree_passes` is set, specifies how many passes to run. Without specifying `query_tree_passes` it runs all the passes.
Examples:
```sql
EXPLAIN SYNTAX SELECT * FROM system.numbers AS a, system.numbers AS b, system.numbers AS c WHERE a.number = b.number AND b.number = c.number;
```

Output:

```sql
SELECT *
FROM system.numbers AS a, system.numbers AS b, system.numbers AS c
WHERE (a.number = b.number) AND (b.number = c.number)
```

With `run_query_tree_passes`:

```sql
EXPLAIN SYNTAX run_query_tree_passes = 1 SELECT * FROM system.numbers AS a, system.numbers AS b, system.numbers AS c WHERE a.number = b.number AND b.number = c.number;
```

Output:

```sql
SELECT
    __table1.number AS `a.number`,
    __table2.number AS `b.number`,
    __table3.number AS `c.number`
FROM system.numbers AS __table1
ALL INNER JOIN system.numbers AS __table2 ON __table1.number = __table2.number
ALL INNER JOIN system.numbers AS __table3 ON __table2.number = __table3.number
```
EXPLAIN QUERY TREE {#explain-query-tree}
Settings:
- `run_passes` — Run all query tree passes before dumping the query tree. Default: `1`.
- `dump_passes` — Dump information about used passes before dumping the query tree. Default: `0`.
- `passes` — Specifies how many passes to run. If set to `-1`, runs all the passes. Default: `-1`.
- `dump_tree` — Display the query tree. Default: `1`.
- `dump_ast` — Display the query AST generated from the query tree. Default: `0`.
Example:

```sql
EXPLAIN QUERY TREE SELECT id, value FROM test_table;
```

```sql
QUERY id: 0
  PROJECTION COLUMNS
    id UInt64
    value String
  PROJECTION
    LIST id: 1, nodes: 2
      COLUMN id: 2, column_name: id, result_type: UInt64, source_id: 3
      COLUMN id: 4, column_name: value, result_type: String, source_id: 3
  JOIN TREE
    TABLE id: 3, table_name: default.test_table
```
EXPLAIN PLAN {#explain-plan}
Dump query plan steps.
Settings:
- `header` — Prints output header for step. Default: 0.
- `description` — Prints step description. Default: 1.
- `indexes` — Shows used indexes, the number of filtered parts and the number of filtered granules for every index applied. Default: 0. Supported for `MergeTree` tables. Starting from ClickHouse >= v25.9, this statement only shows reasonable output when used with `SETTINGS use_query_condition_cache = 0, use_skip_indexes_on_data_read = 0`.
- `projections` — Shows all analyzed projections and their effect on part-level filtering based on projection primary key conditions. For each projection, this section includes statistics such as the number of parts, rows, marks, and ranges that were evaluated using the projection's primary key. It also shows how many data parts were skipped due to this filtering, without reading from the projection itself. Whether a projection was actually used for reading or only analyzed for filtering can be determined by the `description` field. Default: 0. Supported for `MergeTree` tables.
- `actions` — Prints detailed information about step actions. Default: 0.
- `json` — Prints query plan steps as a row in `JSON` format. Default: 0. It is recommended to use `TabSeparatedRaw` (`TSVRaw`) format to avoid unnecessary escaping.
- `input_headers` — Prints input headers for step. Default: 0. Mostly useful only for developers to debug issues related to input-output header mismatch.
When `json=1`, step names will contain an additional suffix with a unique step identifier.
Example:

```sql
EXPLAIN SELECT sum(number) FROM numbers(10) GROUP BY number % 4;
```

```sql
Union
  Expression (Projection)
    Expression (Before ORDER BY and SELECT)
      Aggregating
        Expression (Before GROUP BY)
          SettingQuotaAndLimits (Set limits and quota after reading from storage)
            ReadFromStorage (SystemNumbers)
```

:::note
Step and query cost estimation is not supported.
:::
When `json = 1`, the query plan is represented in JSON format. Every node is a dictionary that always has the keys `Node Type` and `Plans`. `Node Type` is a string with a step name. `Plans` is an array with child step descriptions. Other optional keys may be added depending on node type and settings.
Example:

```sql
EXPLAIN json = 1, description = 0 SELECT 1 UNION ALL SELECT 2 FORMAT TSVRaw;
```

```json
[
  {
    "Plan": {
      "Node Type": "Union",
      "Node Id": "Union_10",
      "Plans": [
        {
          "Node Type": "Expression",
          "Node Id": "Expression_13",
          "Plans": [
            {
              "Node Type": "ReadFromStorage",
              "Node Id": "ReadFromStorage_0"
            }
          ]
        },
        {
          "Node Type": "Expression",
          "Node Id": "Expression_16",
          "Plans": [
            {
              "Node Type": "ReadFromStorage",
              "Node Id": "ReadFromStorage_4"
            }
          ]
        }
      ]
    }
  }
]
```
With `description` = 1, the `Description` key is added to the step:

```json
{
  "Node Type": "ReadFromStorage",
  "Description": "SystemOne"
}
```
With `header` = 1, the `Header` key is added to the step as an array of columns.
Example:

```sql
EXPLAIN json = 1, description = 0, header = 1 SELECT 1, 2 + dummy;
```

```json
[
  {
    "Plan": {
      "Node Type": "Expression",
      "Node Id": "Expression_5",
      "Header": [
        {
          "Name": "1",
          "Type": "UInt8"
        },
        {
          "Name": "plus(2, dummy)",
          "Type": "UInt16"
        }
      ],
      "Plans": [
        {
          "Node Type": "ReadFromStorage",
          "Node Id": "ReadFromStorage_0",
          "Header": [
            {
              "Name": "dummy",
              "Type": "UInt8"
            }
          ]
        }
      ]
    }
  }
]
```
With `indexes` = 1, the `Indexes` key is added. It contains an array of used indexes. Each index is described as JSON with a `Type` key (a string `MinMax`, `Partition`, `PrimaryKey` or `Skip`) and optional keys:
- `Name` — The index name (currently only used for `Skip` indexes).
- `Keys` — The array of columns used by the index.
- `Condition` — The used condition.
- `Description` — The index description (currently only used for `Skip` indexes).
- `Parts` — The number of parts after/before the index is applied.
- `Granules` — The number of granules after/before the index is applied.
- `Ranges` — The number of granules ranges after the index is applied.
Example:
```json
"Node Type": "ReadFromMergeTree",
"Indexes": [
  {
    "Type": "MinMax",
    "Keys": ["y"],
    "Condition": "(y in [1, +inf))",
    "Parts": 4/5,
    "Granules": 11/12
  },
  {
    "Type": "Partition",
    "Keys": ["y", "bitAnd(z, 3)"],
    "Condition": "and((bitAnd(z, 3) not in [1, 1]), and((y in [1, +inf)), (bitAnd(z, 3) not in [1, 1])))",
    "Parts": 3/4,
    "Granules": 10/11
  },
  {
    "Type": "PrimaryKey",
    "Keys": ["x", "y"],
    "Condition": "and((x in [11, +inf)), (y in [1, +inf)))",
    "Parts": 2/3,
    "Granules": 6/10,
    "Search Algorithm": "generic exclusion search"
  },
  {
    "Type": "Skip",
    "Name": "t_minmax",
    "Description": "minmax GRANULARITY 2",
    "Parts": 1/2,
    "Granules": 2/6
  },
  {
    "Type": "Skip",
    "Name": "t_set",
    "Description": "set GRANULARITY 2",
    "Parts": 1/1,
    "Granules": 1/2
  }
]
```
With `projections` = 1, the `Projections` key is added. It contains an array of analyzed projections. Each projection is described as JSON with the following keys:
- `Name` — The projection name.
- `Condition` — The used projection primary key condition.
- `Description` — The description of how the projection is used (e.g. part-level filtering).
- `Selected Parts` — Number of parts selected by the projection.
- `Selected Marks` — Number of marks selected.
- `Selected Ranges` — Number of ranges selected.
- `Selected Rows` — Number of rows selected.
- `Filtered Parts` — Number of parts skipped due to part-level filtering.
Example:
```json
"Node Type": "ReadFromMergeTree",
"Projections": [
  {
    "Name": "region_proj",
    "Description": "Projection has been analyzed and is used for part-level filtering",
    "Condition": "(region in ['us_west', 'us_west'])",
    "Search Algorithm": "binary search",
    "Selected Parts": 3,
    "Selected Marks": 3,
    "Selected Ranges": 3,
    "Selected Rows": 3,
    "Filtered Parts": 2
  },
  {
    "Name": "user_id_proj",
    "Description": "Projection has been analyzed and is used for part-level filtering",
    "Condition": "(user_id in [107, 107])",
    "Search Algorithm": "binary search",
    "Selected Parts": 1,
    "Selected Marks": 1,
    "Selected Ranges": 1,
    "Selected Rows": 1,
    "Filtered Parts": 2
  }
]
```
With `actions` = 1, added keys depend on step type.
Example:

```sql
EXPLAIN json = 1, actions = 1, description = 0 SELECT 1 FORMAT TSVRaw;
```
```json
[
  {
    "Plan": {
      "Node Type": "Expression",
      "Node Id": "Expression_5",
      "Expression": {
        "Inputs": [
          {
            "Name": "dummy",
            "Type": "UInt8"
          }
        ],
        "Actions": [
          {
            "Node Type": "INPUT",
            "Result Type": "UInt8",
            "Result Name": "dummy",
            "Arguments": [0],
            "Removed Arguments": [0],
            "Result": 0
          },
          {
            "Node Type": "COLUMN",
            "Result Type": "UInt8",
            "Result Name": "1",
            "Column": "Const(UInt8)",
            "Arguments": [],
            "Removed Arguments": [],
            "Result": 1
          }
        ],
        "Outputs": [
          {
            "Name": "1",
            "Type": "UInt8"
          }
        ],
        "Positions": [1]
      },
      "Plans": [
        {
          "Node Type": "ReadFromStorage",
          "Node Id": "ReadFromStorage_0"
        }
      ]
    }
  }
]
```
EXPLAIN PIPELINE {#explain-pipeline}
Settings:
- `header` — Prints header for each output port. Default: 0.
- `graph` — Prints a graph described in the DOT graph description language. Default: 0.
- `compact` — Prints graph in compact mode if `graph` setting is enabled. Default: 1.
When `compact=0` and `graph=1`, processor names will contain an additional suffix with a unique processor identifier.
Example:

```sql
EXPLAIN PIPELINE SELECT sum(number) FROM numbers_mt(100000) GROUP BY number % 4;
```

```sql
(Union)
(Expression)
ExpressionTransform
(Expression)
ExpressionTransform
(Aggregating)
Resize 2 → 1
AggregatingTransform × 2
(Expression)
ExpressionTransform × 2
(SettingQuotaAndLimits)
(ReadFromStorage)
NumbersRange × 2 0 → 1
```
EXPLAIN ESTIMATE {#explain-estimate}
Shows the estimated number of rows, marks and parts to be read from the tables while processing the query. Works with tables in the `MergeTree` family.
Example
Creating a table:

```sql
CREATE TABLE ttt (i Int64) ENGINE = MergeTree() ORDER BY i SETTINGS index_granularity = 16, write_final_mark = 0;
INSERT INTO ttt SELECT number FROM numbers(128);
OPTIMIZE TABLE ttt;
```

Query:

```sql
EXPLAIN ESTIMATE SELECT * FROM ttt;
```

Result:

```text
┌─database─┬─table─┬─parts─┬─rows─┬─marks─┐
│ default  │ ttt   │     1 │  128 │     8 │
└──────────┴───────┴───────┴──────┴───────┘
```
EXPLAIN TABLE OVERRIDE {#explain-table-override}
Shows the result of a table override on a table schema accessed through a table function.
Also does some validation, throwing an exception if the override would have caused some kind of failure.
Example
Assume you have a remote MySQL table like this:

```sql
CREATE TABLE db.tbl (
    id INT PRIMARY KEY,
    created DATETIME DEFAULT now()
)
```
```sql
EXPLAIN TABLE OVERRIDE mysql('127.0.0.1:3306', 'db', 'tbl', 'root', 'clickhouse')
PARTITION BY toYYYYMM(assumeNotNull(created))
```

Result:

```text
┌─explain─────────────────────────────────────────────────┐
│ PARTITION BY uses columns: `created` Nullable(DateTime) │
└─────────────────────────────────────────────────────────┘
```
:::note
The validation is not complete, so a successful query does not guarantee that the override would not cause issues.
:::
---
description: 'Documentation for Describe Table'
sidebar_label: 'DESCRIBE TABLE'
sidebar_position: 42
slug: /sql-reference/statements/describe-table
title: 'DESCRIBE TABLE'
doc_type: 'reference'
---
Returns information about table columns.
Syntax

```sql
DESC|DESCRIBE TABLE [db.]table [INTO OUTFILE filename] [FORMAT format]
```

The `DESCRIBE` statement returns a row for each table column with the following `String` values:
- `name` — A column name.
- `type` — A column type.
- `default_type` — A clause that is used in the column default expression: `DEFAULT`, `MATERIALIZED` or `ALIAS`. If there is no default expression, then an empty string is returned.
- `default_expression` — An expression specified after the `DEFAULT` clause.
- `comment` — A column comment.
- `codec_expression` — A codec that is applied to the column.
- `ttl_expression` — A TTL expression.
- `is_subcolumn` — A flag that equals `1` for internal subcolumns. It is included in the result only if subcolumn description is enabled by the `describe_include_subcolumns` setting.
All columns in `Nested` data structures are described separately. The name of each column is prefixed with a parent column name and a dot.
To show internal subcolumns of other data types, use the `describe_include_subcolumns` setting.
Example
Query:
```sql
CREATE TABLE describe_example (
id UInt64, text String DEFAULT 'unknown' CODEC(ZSTD),
user Tuple (name String, age UInt8)
) ENGINE = MergeTree() ORDER BY id;
DESCRIBE TABLE describe_example;
DESCRIBE TABLE describe_example SETTINGS describe_include_subcolumns=1;
```
Result:

```text
┌─name─┬─type──────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ id   │ UInt64                        │              │                    │         │                  │                │
│ text │ String                        │ DEFAULT      │ 'unknown'          │         │ ZSTD(1)          │                │
│ user │ Tuple(name String, age UInt8) │              │                    │         │                  │                │
└──────┴───────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```

The second query additionally shows subcolumns:
```text
┌─name──────┬─type──────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┬─is_subcolumn─┐
│ id        │ UInt64                        │              │                    │         │                  │                │            0 │
│ text      │ String                        │ DEFAULT      │ 'unknown'          │         │ ZSTD(1)          │                │            0 │
│ user      │ Tuple(name String, age UInt8) │              │                    │         │                  │                │            0 │
│ user.name │ String                        │              │                    │         │                  │                │            1 │
│ user.age  │ UInt8                         │              │                    │         │                  │                │            1 │
└───────────┴───────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┴──────────────┘
```
See Also
- `describe_include_subcolumns` setting.
---
description: 'Documentation for DROP Statements'
sidebar_label: 'DROP'
sidebar_position: 44
slug: /sql-reference/statements/drop
title: 'DROP Statements'
doc_type: 'reference'
---
DROP Statements
Deletes an existing entity. If the `IF EXISTS` clause is specified, these queries do not return an error if the entity does not exist. If the `SYNC` modifier is specified, the entity is dropped without delay.
DROP DATABASE {#drop-database}
Deletes all tables inside the `db` database, then deletes the `db` database itself.
Syntax:

```sql
DROP DATABASE [IF EXISTS] db [ON CLUSTER cluster] [SYNC]
```
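For instance, a hedged sketch assuming a database named `my_db` that may or may not exist:

```sql
-- Drop the database (and all its tables) without an error if it is absent,
-- waiting for the drop to complete:
DROP DATABASE IF EXISTS my_db SYNC
```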
DROP TABLE {#drop-table}
Deletes one or more tables.
:::tip
To undo the deletion of a table, please see UNDROP TABLE
:::
Syntax:

```sql
DROP [TEMPORARY] TABLE [IF EXISTS] [IF EMPTY] [db1.]name_1[, [db2.]name_2, ...] [ON CLUSTER cluster] [SYNC]
```

Limitations:
- If the clause `IF EMPTY` is specified, the server checks the emptiness of the table only on the replica which received the query.
- Deleting multiple tables at once is not an atomic operation, i.e. if the deletion of a table fails, subsequent tables will not be deleted.
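As a hedged example (the table names `db1.events` and `db2.sessions` are hypothetical):

```sql
-- Drop two tables in one statement; if dropping db1.events fails,
-- db2.sessions is NOT dropped, because the operation is not atomic:
DROP TABLE IF EXISTS db1.events, db2.sessions SYNC
```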
DROP DICTIONARY {#drop-dictionary}
Deletes the dictionary.
Syntax:

```sql
DROP DICTIONARY [IF EXISTS] [db.]name [SYNC]
```

DROP USER {#drop-user}
Deletes a user.
Syntax:

```sql
DROP USER [IF EXISTS] name [,...] [ON CLUSTER cluster_name] [FROM access_storage_type]
```

DROP ROLE {#drop-role}
Deletes a role. The deleted role is revoked from all the entities where it was assigned.
Syntax:

```sql
DROP ROLE [IF EXISTS] name [,...] [ON CLUSTER cluster_name] [FROM access_storage_type]
```

DROP ROW POLICY {#drop-row-policy}
Deletes a row policy. The deleted row policy is revoked from all the entities where it was assigned.
Syntax:

```sql
DROP [ROW] POLICY [IF EXISTS] name [,...] ON [database.]table [,...] [ON CLUSTER cluster_name] [FROM access_storage_type]
```

DROP QUOTA {#drop-quota}
Deletes a quota. The deleted quota is revoked from all the entities where it was assigned.
Syntax:

```sql
DROP QUOTA [IF EXISTS] name [,...] [ON CLUSTER cluster_name] [FROM access_storage_type]
```

DROP SETTINGS PROFILE {#drop-settings-profile}
Deletes a settings profile. The deleted settings profile is revoked from all the entities where it was assigned.
Syntax:

```sql
DROP [SETTINGS] PROFILE [IF EXISTS] name [,...] [ON CLUSTER cluster_name] [FROM access_storage_type]
```

DROP VIEW {#drop-view}
Deletes a view. Views can be deleted by a `DROP TABLE` command as well, but `DROP VIEW` checks that `[db.]name` is a view.
Syntax:

```sql
DROP VIEW [IF EXISTS] [db.]name [ON CLUSTER cluster] [SYNC]
```

DROP FUNCTION {#drop-function}
Deletes a user defined function created by `CREATE FUNCTION`.
System functions can not be dropped.
Syntax

```sql
DROP FUNCTION [IF EXISTS] function_name [on CLUSTER cluster]
```

Example

```sql
CREATE FUNCTION linear_equation AS (x, k, b) -> k*x + b;
DROP FUNCTION linear_equation;
```

DROP NAMED COLLECTION {#drop-named-collection}
Deletes a named collection.
Syntax

```sql
DROP NAMED COLLECTION [IF EXISTS] name [on CLUSTER cluster]
```

Example

```sql
CREATE NAMED COLLECTION foobar AS a = '1', b = '2';
DROP NAMED COLLECTION foobar;
```
---
description: 'Documentation for WATCH Statement'
sidebar_label: 'WATCH'
sidebar_position: 53
slug: /sql-reference/statements/watch
title: 'WATCH Statement'
doc_type: 'reference'
---
import DeprecatedBadge from '@theme/badges/DeprecatedBadge';
WATCH Statement
This feature is deprecated and will be removed in the future.
For your convenience, the old documentation is located
here
---
description: 'Documentation for SET Statement'
sidebar_label: 'SET'
sidebar_position: 50
slug: /sql-reference/statements/set
title: 'SET Statement'
doc_type: 'reference'
---
SET Statement

```sql
SET param = value
```

Assigns `value` to the `param` setting for the current session. You cannot change server settings this way.
You can also set all the values from the specified settings profile in a single query.

```sql
SET profile = 'profile-name-from-the-settings-file'
```

For boolean settings set to true, you can use a shorthand syntax by omitting the value assignment. When only the setting name is specified, it is automatically set to `1` (true).

```sql
-- These are equivalent:
SET force_index_by_date = 1
SET force_index_by_date
```

For more information, see Settings.
---
description: 'Documentation for Set Role'
sidebar_label: 'SET ROLE'
sidebar_position: 51
slug: /sql-reference/statements/set-role
title: 'SET ROLE Statement'
doc_type: 'reference'
---
Activates roles for the current user.

```sql
SET ROLE {DEFAULT | NONE | role [,...] | ALL | ALL EXCEPT role [,...]}
```
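For instance, a hedged example using hypothetical role names `role1` and `role2`, assumed to be already granted to the current user:

```sql
-- Activate only these two roles for the current session:
SET ROLE role1, role2
```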
SET DEFAULT ROLE {#set-default-role}
Sets default roles to a user.
Default roles are automatically activated at user login. You can set as default only the previously granted roles. If the role isn't granted to a user, ClickHouse throws an exception.

```sql
SET DEFAULT ROLE {NONE | role [,...] | ALL | ALL EXCEPT role [,...]} TO {user|CURRENT_USER} [,...]
```

Examples {#examples}
Set multiple default roles to a user:

```sql
SET DEFAULT ROLE role1, role2, ... TO user
```

Set all the granted roles as default to a user:

```sql
SET DEFAULT ROLE ALL TO user
```

Purge default roles from a user:

```sql
SET DEFAULT ROLE NONE TO user
```

Set all the granted roles as default except for specific roles `role1` and `role2`:

```sql
SET DEFAULT ROLE ALL EXCEPT role1, role2 TO user
```
---
description: 'Documentation for USE Statement'
sidebar_label: 'USE'
sidebar_position: 53
slug: /sql-reference/statements/use
title: 'USE Statement'
doc_type: 'reference'
---
USE Statement

```sql
USE [DATABASE] db
```

Lets you set the current database for the session.
The current database is used for searching for tables if the database is not explicitly defined in the query with a dot before the table name.
This query can't be made when using the HTTP protocol, since there is no concept of a session.
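A short hedged illustration, assuming a database `db1` containing a table `t1`:

```sql
USE db1;
-- With the current database set, this is equivalent to SELECT count() FROM db1.t1:
SELECT count() FROM t1;
```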
---
description: 'Documentation for Check Grant'
sidebar_label: 'CHECK GRANT'
sidebar_position: 56
slug: /sql-reference/statements/check-grant
title: 'CHECK GRANT Statement'
doc_type: 'reference'
---
The `CHECK GRANT` query is used to check whether the current user/role has been granted a specific privilege.
Syntax {#syntax}
The basic syntax of the query is as follows:

```sql
CHECK GRANT privilege[(column_name [,...])] [,...] ON {db.table[*]|db[*].*|*.*|table[*]|*}
```

- `privilege` — Type of privilege.
Examples {#examples}
If the user has been granted the privilege, the response `check_grant` will be `1`. Otherwise, the response `check_grant` will be `0`.
If `table_1.col1` exists and the current user is granted the `SELECT`/`SELECT(col1)` privilege, directly or via a role with that privilege, the response is `1`.

```sql
CHECK GRANT SELECT(col1) ON table_1;
```

```text
┌─result─┐
│      1 │
└────────┘
```

If `table_2.col2` doesn't exist, or the current user is not granted the `SELECT`/`SELECT(col2)` privilege, directly or via a role with that privilege, the response is `0`.

```sql
CHECK GRANT SELECT(col2) ON table_2;
```

```text
┌─result─┐
│      0 │
└────────┘
```
Wildcard {#wildcard}
Specifying privileges, you can use an asterisk (`*`) instead of a table or a database name. Please check WILDCARD GRANTS for wildcard rules.
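A hedged example of a wildcard check, assuming a database named `db`:

```sql
-- Check whether the current user may SELECT from every table in db:
CHECK GRANT SELECT ON db.*
```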
---
description: 'Documentation for RENAME Statement'
sidebar_label: 'RENAME'
sidebar_position: 48
slug: /sql-reference/statements/rename
title: 'RENAME Statement'
doc_type: 'reference'
---
RENAME Statement
Renames databases, tables, or dictionaries. Several entities can be renamed in a single query.
Note that the
RENAME
query with several entities is non-atomic operation. To swap entities names atomically, use the
EXCHANGE
statement.
Syntax
sql
RENAME [DATABASE|TABLE|DICTIONARY] name TO new_name [,...] [ON CLUSTER cluster]
RENAME DATABASE {#rename-database}
Renames databases.
Syntax
sql
RENAME DATABASE atomic_database1 TO atomic_database2 [,...] [ON CLUSTER cluster]
RENAME TABLE {#rename-table}
Renames one or more tables.
Renaming tables is a light operation. If you pass a different database after
TO
, the table will be moved to this database. However, the directories with databases must reside in the same file system. Otherwise, an error is returned.
If you rename multiple tables in one query, the operation is not atomic. It may be partially executed, and queries in other sessions may get
Table ... does not exist ...
error.
Syntax
sql
RENAME TABLE [db1.]name1 TO [db2.]name2 [,...] [ON CLUSTER cluster]
Example
sql
RENAME TABLE table_A TO table_A_bak, table_B TO table_B_bak;
You can also use a simpler form:
sql
RENAME table_A TO table_A_bak, table_B TO table_B_bak;
RENAME DICTIONARY {#rename-dictionary}
Renames one or several dictionaries. This query can be used to move dictionaries between databases.
Syntax
sql
RENAME DICTIONARY [db0.]dict_A TO [db1.]dict_B [,...] [ON CLUSTER cluster]
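As a sketch using the placeholder names from the syntax above, this moves a dictionary from `db0` to `db1` while renaming it:

```sql
RENAME DICTIONARY db0.dict_A TO db1.dict_B;
```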
See Also
Dictionaries | {"source_file": "rename.md"} | [
0.058010030537843704,
-0.06190846487879753,
0.02419905550777912,
0.0559798888862133,
-0.051029328256845474,
-0.03742705658078194,
0.0655239075422287,
0.00726915942505002,
0.033936381340026855,
0.04414175823330879,
0.08911533653736115,
-0.009429484605789185,
0.07005278766155243,
-0.06811654... |
a0d1e871-45fb-448d-a5d4-b98ddcac8cd4 | description: 'Documentation for ClickHouse SQL Statements'
sidebar_label: 'List of statements'
sidebar_position: 1
slug: /sql-reference/statements/
title: 'ClickHouse SQL Statements'
doc_type: 'reference'
ClickHouse SQL Statements
Users interact with ClickHouse using SQL statements. ClickHouse supports common SQL statements like
SELECT
and
CREATE
, but it also provides specialized statements like
KILL
and
OPTIMIZE
. | {"source_file": "index.md"} | [
0.02285257913172245,
-0.052681367844343185,
-0.054703906178474426,
0.06334425508975983,
-0.028495818376541138,
-0.01852821372449398,
0.07861415296792984,
0.014609599485993385,
-0.0792018473148346,
-0.010952954180538654,
0.017266562208533287,
0.01776058040559292,
0.04775676876306534,
-0.078... |
e0d129cb-d88e-4e58-a321-0a418ee9e0c1 | description: 'Documentation for EXISTS Statement'
sidebar_label: 'EXISTS'
sidebar_position: 45
slug: /sql-reference/statements/exists
title: 'EXISTS Statement'
doc_type: 'reference'
EXISTS Statement
sql
EXISTS [TEMPORARY] [TABLE|DICTIONARY|DATABASE] [db.]name [INTO OUTFILE filename] [FORMAT format]
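For example, with a hypothetical table name, the following returns `1` if the table exists and `0` otherwise:

```sql
EXISTS TABLE db.my_table;
```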
Returns a single
UInt8
-type column, which contains the single value
0
if the table or database does not exist, or
1
if the table exists in the specified database. | {"source_file": "exists.md"} | [
0.0038585374131798744,
0.01579095609486103,
-0.0936727374792099,
0.07268772274255753,
0.015791088342666626,
0.0014322432689368725,
0.02724970132112503,
0.07822324335575104,
0.025067416951060295,
0.0037330614868551493,
0.060938239097595215,
-0.06648807227611542,
0.0833326056599617,
-0.09909... |
6903d564-c13b-47bd-9912-e7b3bfd76ea5 | description: 'Documentation for Optimize'
sidebar_label: 'OPTIMIZE'
sidebar_position: 47
slug: /sql-reference/statements/optimize
title: 'OPTIMIZE Statement'
doc_type: 'reference'
This query tries to initialize an unscheduled merge of data parts for tables. Note that we generally recommend against using
OPTIMIZE TABLE ... FINAL
(see these
docs
) as its use case is meant for administration, not for daily operations.
:::note
OPTIMIZE
can't fix the
Too many parts
error.
:::
Syntax
sql
OPTIMIZE TABLE [db.]name [ON CLUSTER cluster] [PARTITION partition | PARTITION ID 'partition_id'] [FINAL | FORCE] [DEDUPLICATE [BY expression]]
The
OPTIMIZE
query is supported for
MergeTree
family (including
materialized views
) and the
Buffer
engines. Other table engines aren't supported.
When
OPTIMIZE
is used with the
ReplicatedMergeTree
family of table engines, ClickHouse creates a task for merging and waits for execution on all replicas (if the
alter_sync
setting is set to
2
) or on current replica (if the
alter_sync
setting is set to
1
).
If
OPTIMIZE
does not perform a merge for any reason, it does not notify the client. To enable notifications, use the
optimize_throw_if_noop
setting.
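For example, to receive an exception instead of a silent no-op (the table name is hypothetical):

```sql
SET optimize_throw_if_noop = 1;
OPTIMIZE TABLE db.my_table FINAL;
```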
If you specify a
PARTITION
, only the specified partition is optimized.
How to set partition expression
.
If you specify
FINAL
or
FORCE
, optimization is performed even when all the data is already in one part. You can control this behaviour with
optimize_skip_merged_partitions
. Also, the merge is forced even if concurrent merges are performed.
If you specify
DEDUPLICATE
, then completely identical rows (unless a BY clause is specified) will be deduplicated (all columns are compared). This makes sense only for the MergeTree engine.
You can specify how long (in seconds) to wait for inactive replicas to execute
OPTIMIZE
queries by the
replication_wait_for_inactive_replica_timeout
setting.
:::note
If the
alter_sync
is set to
2
and some replicas are not active for longer than the time specified by the
replication_wait_for_inactive_replica_timeout
setting, then an exception
UNFINISHED
is thrown.
:::
BY expression {#by-expression}
If you want to perform deduplication on a custom set of columns rather than on all of them, you can specify the list of columns explicitly or use any combination of
*
,
COLUMNS
or
EXCEPT
expressions. The explicitly written or implicitly expanded list of columns must include all columns specified in row ordering expression (both primary and sorting keys) and partitioning expression (partitioning key).
:::note
Notice that
*
behaves just like in
SELECT
:
MATERIALIZED
and
ALIAS
columns are not used for expansion.
Also, it is an error to specify an empty list of columns, to write an expression that results in an empty list of columns, or to deduplicate by an
ALIAS
column.
:::
Syntax | {"source_file": "optimize.md"} | [
0.02750357985496521,
-0.0062935128808021545,
0.01888437755405903,
0.036270689219236374,
-0.00583071680739522,
-0.08187677711248398,
0.04244433343410492,
0.042878638952970505,
-0.04409363493323326,
0.015717552974820137,
0.01377086341381073,
-0.008505024015903473,
0.026173653081059456,
-0.05... |
a44efd2e-2260-4a1f-a9f5-b5d6e3a024e3 | Also, it is an error to specify empty list of columns, or write an expression that results in an empty list of columns, or deduplicate by an
ALIAS
column.
:::
Syntax
sql
OPTIMIZE TABLE table DEDUPLICATE; -- all columns
OPTIMIZE TABLE table DEDUPLICATE BY *; -- excludes MATERIALIZED and ALIAS columns
OPTIMIZE TABLE table DEDUPLICATE BY colX,colY,colZ;
OPTIMIZE TABLE table DEDUPLICATE BY * EXCEPT colX;
OPTIMIZE TABLE table DEDUPLICATE BY * EXCEPT (colX, colY);
OPTIMIZE TABLE table DEDUPLICATE BY COLUMNS('column-matched-by-regex');
OPTIMIZE TABLE table DEDUPLICATE BY COLUMNS('column-matched-by-regex') EXCEPT colX;
OPTIMIZE TABLE table DEDUPLICATE BY COLUMNS('column-matched-by-regex') EXCEPT (colX, colY);
Examples
Consider the table:
sql
CREATE TABLE example (
primary_key Int32,
secondary_key Int32,
value UInt32,
partition_key UInt32,
materialized_value UInt32 MATERIALIZED 12345,
aliased_value UInt32 ALIAS 2,
PRIMARY KEY primary_key
) ENGINE=MergeTree
PARTITION BY partition_key
ORDER BY (primary_key, secondary_key);
sql
INSERT INTO example (primary_key, secondary_key, value, partition_key)
VALUES (0, 0, 0, 0), (0, 0, 0, 0), (1, 1, 2, 2), (1, 1, 2, 3), (1, 1, 3, 3);
sql
SELECT * FROM example;
Result:
```sql
┌─primary_key─┬─secondary_key─┬─value─┬─partition_key─┐
│           0 │             0 │     0 │             0 │
│           0 │             0 │     0 │             0 │
└─────────────┴───────────────┴───────┴───────────────┘
┌─primary_key─┬─secondary_key─┬─value─┬─partition_key─┐
│           1 │             1 │     2 │             2 │
└─────────────┴───────────────┴───────┴───────────────┘
┌─primary_key─┬─secondary_key─┬─value─┬─partition_key─┐
│           1 │             1 │     2 │             3 │
│           1 │             1 │     3 │             3 │
└─────────────┴───────────────┴───────┴───────────────┘
```
All following examples are executed against this state with 5 rows.
DEDUPLICATE
{#deduplicate}
When columns for deduplication are not specified, all of them are taken into account. The row is removed only if all values in all columns are equal to corresponding values in the previous row:
sql
OPTIMIZE TABLE example FINAL DEDUPLICATE;
sql
SELECT * FROM example;
Result:
response
┌─primary_key─┬─secondary_key─┬─value─┬─partition_key─┐
│           1 │             1 │     2 │             2 │
└─────────────┴───────────────┴───────┴───────────────┘
┌─primary_key─┬─secondary_key─┬─value─┬─partition_key─┐
│           0 │             0 │     0 │             0 │
└─────────────┴───────────────┴───────┴───────────────┘
┌─primary_key─┬─secondary_key─┬─value─┬─partition_key─┐
│           1 │             1 │     2 │             3 │
│           1 │             1 │     3 │             3 │
└─────────────┴───────────────┴───────┴───────────────┘
DEDUPLICATE BY *
{#deduplicate-by-} | {"source_file": "optimize.md"} | [
0.011375212110579014,
0.020677246153354645,
0.00911223515868187,
-0.018360963091254234,
-0.015748102217912674,
-0.06954652816057205,
0.0581534281373024,
0.0608673132956028,
-0.012309761717915535,
0.061982329934835434,
0.049236711114645004,
-0.02690749056637287,
0.09417020529508591,
-0.0654... |
fdcc9cfc-3d0b-4694-94fb-77a59a0bc010 | DEDUPLICATE BY *
{#deduplicate-by-}
When columns are specified implicitly, the table is deduplicated by all columns that are not
ALIAS
or
MATERIALIZED
. Considering the table above, these are
primary_key
,
secondary_key
,
value
, and
partition_key
columns:
sql
OPTIMIZE TABLE example FINAL DEDUPLICATE BY *;
sql
SELECT * FROM example;
Result:
response
┌─primary_key─┬─secondary_key─┬─value─┬─partition_key─┐
│           1 │             1 │     2 │             2 │
└─────────────┴───────────────┴───────┴───────────────┘
┌─primary_key─┬─secondary_key─┬─value─┬─partition_key─┐
│           0 │             0 │     0 │             0 │
└─────────────┴───────────────┴───────┴───────────────┘
┌─primary_key─┬─secondary_key─┬─value─┬─partition_key─┐
│           1 │             1 │     2 │             3 │
│           1 │             1 │     3 │             3 │
└─────────────┴───────────────┴───────┴───────────────┘
DEDUPLICATE BY * EXCEPT
{#deduplicate-by--except}
Deduplicate by all columns that are not
ALIAS
or
MATERIALIZED
and explicitly not
value
:
primary_key
,
secondary_key
, and
partition_key
columns.
sql
OPTIMIZE TABLE example FINAL DEDUPLICATE BY * EXCEPT value;
sql
SELECT * FROM example;
Result:
response
┌─primary_key─┬─secondary_key─┬─value─┬─partition_key─┐
│           1 │             1 │     2 │             2 │
└─────────────┴───────────────┴───────┴───────────────┘
┌─primary_key─┬─secondary_key─┬─value─┬─partition_key─┐
│           0 │             0 │     0 │             0 │
└─────────────┴───────────────┴───────┴───────────────┘
┌─primary_key─┬─secondary_key─┬─value─┬─partition_key─┐
│           1 │             1 │     2 │             3 │
└─────────────┴───────────────┴───────┴───────────────┘
DEDUPLICATE BY <list of columns>
{#deduplicate-by-list-of-columns}
Deduplicate explicitly by
primary_key
,
secondary_key
, and
partition_key
columns:
sql
OPTIMIZE TABLE example FINAL DEDUPLICATE BY primary_key, secondary_key, partition_key;
sql
SELECT * FROM example;
Result:
response
┌─primary_key─┬─secondary_key─┬─value─┬─partition_key─┐
│           1 │             1 │     2 │             2 │
└─────────────┴───────────────┴───────┴───────────────┘
┌─primary_key─┬─secondary_key─┬─value─┬─partition_key─┐
│           0 │             0 │     0 │             0 │
└─────────────┴───────────────┴───────┴───────────────┘
┌─primary_key─┬─secondary_key─┬─value─┬─partition_key─┐
│           1 │             1 │     2 │             3 │
└─────────────┴───────────────┴───────┴───────────────┘
DEDUPLICATE BY COLUMNS(<regex>)
{#deduplicate-by-columnsregex}
Deduplicate by all columns matching a regex:
primary_key
,
secondary_key
, and
partition_key
columns:
sql
OPTIMIZE TABLE example FINAL DEDUPLICATE BY COLUMNS('.*_key');
sql
SELECT * FROM example;
Result: | {"source_file": "optimize.md"} | [
-0.020420433953404427,
-0.01151272188872099,
0.015700388699769974,
-0.009482868015766144,
-0.0009340166579931974,
-0.1248195543885231,
0.03564534708857536,
0.01743903197348118,
-0.011267926543951035,
0.031873855739831924,
0.08437497913837433,
0.009099439717829227,
0.04171779006719589,
-0.0... |
4d87a7ca-1676-494a-86dd-c2166721844e | sql
OPTIMIZE TABLE example FINAL DEDUPLICATE BY COLUMNS('.*_key');
sql
SELECT * FROM example;
Result:
response
┌─primary_key─┬─secondary_key─┬─value─┬─partition_key─┐
│           0 │             0 │     0 │             0 │
└─────────────┴───────────────┴───────┴───────────────┘
┌─primary_key─┬─secondary_key─┬─value─┬─partition_key─┐
│           1 │             1 │     2 │             2 │
└─────────────┴───────────────┴───────┴───────────────┘
┌─primary_key─┬─secondary_key─┬─value─┬─partition_key─┐
│           1 │             1 │     2 │             3 │
└─────────────┴───────────────┴───────┴───────────────┘
0.021190425381064415,
-0.015636257827281952,
-0.01336949784308672,
0.017475666478276253,
-0.06203790009021759,
-0.09797690808773041,
0.0825750082731247,
-0.016877979040145874,
-0.010011591948568821,
0.03167478367686272,
0.11920204758644104,
0.02781723439693451,
0.05447356402873993,
-0.1216... |
f65ec240-c8ad-4179-b587-cd6b0a67c013 | description: 'Lightweight deletes simplify the process of deleting data from the database.'
keywords: ['delete']
sidebar_label: 'DELETE'
sidebar_position: 36
slug: /sql-reference/statements/delete
title: 'The Lightweight DELETE Statement'
doc_type: 'reference'
The lightweight
DELETE
statement removes rows from the table
[db.]table
that match the expression
expr
. It is only available for the *MergeTree table engine family.
sql
DELETE FROM [db.]table [ON CLUSTER cluster] [IN PARTITION partition_expr] WHERE expr;
It is called "lightweight
DELETE
" to contrast it to the
ALTER TABLE ... DELETE
command, which is a heavyweight process.
Examples {#examples}
sql
-- Deletes all rows from the `hits` table where the `Title` column contains the text `hello`
DELETE FROM hits WHERE Title LIKE '%hello%';
Lightweight
DELETE
does not delete data immediately {#lightweight-delete-does-not-delete-data-immediately}
Lightweight
DELETE
is implemented as a
mutation
that marks rows as deleted but does not immediately physically delete them.
By default,
DELETE
statements wait until marking the rows as deleted is completed before returning. This can take a long time if the amount of data is large. Alternatively, you can run it asynchronously in the background using the setting
lightweight_deletes_sync
. If disabled, the
DELETE
statement returns immediately, but the data can still be visible to queries until the background mutation is finished.
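A sketch of such an asynchronous delete, reusing the hits example from above:

```sql
SET lightweight_deletes_sync = 0;
-- returns immediately; rows may stay visible until the background mutation finishes
DELETE FROM hits WHERE Title LIKE '%hello%';
```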
The mutation does not physically delete the rows that have been marked as deleted; this only happens during the next merge. As a result, it is possible that for an unspecified period, data is not actually deleted from storage and is only marked as deleted.
If you need to guarantee that your data is deleted from storage in a predictable time, consider using the table setting
min_age_to_force_merge_seconds
. Or you can use the
ALTER TABLE ... DELETE
command. Note that deleting data using
ALTER TABLE ... DELETE
may consume significant resources as it recreates all affected parts.
Deleting large amounts of data {#deleting-large-amounts-of-data}
Large deletes can negatively affect ClickHouse performance. If you are attempting to delete all rows from a table, consider using the
TRUNCATE TABLE
command.
If you anticipate frequent deletes, consider using a
custom partitioning key
. You can then use the
ALTER TABLE ... DROP PARTITION
command to quickly drop all rows associated with that partition.
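For example, for a table partitioned by month, a whole partition can be dropped at once (the table name and partition value are hypothetical):

```sql
ALTER TABLE events DROP PARTITION 202401;
```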
Limitations of lightweight
DELETE
{#limitations-of-lightweight-delete}
Lightweight
DELETE
s with projections {#lightweight-deletes-with-projections}
By default,
DELETE
does not work for tables with projections. This is because rows in a projection may be affected by a
DELETE
operation. But there is a
MergeTree setting
lightweight_mutation_projection_mode
to change the behavior. | {"source_file": "delete.md"} | [
-0.015349290333688259,
-0.007453207857906818,
0.004618214908987284,
0.07113650441169739,
0.0611790306866169,
-0.11507324129343033,
0.04973997548222542,
-0.05087713524699211,
0.03242511302232742,
0.02548290230333805,
0.0596906878054142,
0.018077518790960312,
0.05402759462594986,
-0.07404165... |
7294db9f-1acb-4272-9b7a-119d6f30602f | Performance considerations when using lightweight
DELETE
{#performance-considerations-when-using-lightweight-delete}
Deleting large volumes of data with the lightweight
DELETE
statement can negatively affect SELECT query performance.
The following can also negatively impact lightweight
DELETE
performance:
A heavy
WHERE
condition in a
DELETE
query.
If the mutations queue is filled with many other mutations, this can lead to performance issues, as all mutations on a table are executed sequentially.
The affected table has a very large number of data parts.
Having a lot of data in compact parts. In a Compact part, all columns are stored in one file.
Delete permissions {#delete-permissions}
DELETE
requires the
ALTER DELETE
privilege. To enable
DELETE
statements on a specific table for a given user, run the following command:
sql
GRANT ALTER DELETE ON db.table TO username;
How lightweight DELETEs work internally in ClickHouse {#how-lightweight-deletes-work-internally-in-clickhouse}
A "mask" is applied to affected rows
When a
DELETE FROM table ...
query is executed, ClickHouse saves a mask where each row is marked as either "existing" or as "deleted". Those "deleted" rows are omitted for subsequent queries. However, rows are actually only removed later by subsequent merges. Writing this mask is much more lightweight than what is done by an
ALTER TABLE ... DELETE
query.
The mask is implemented as a hidden
_row_exists
system column that stores
True
for all visible rows and
False
for deleted ones. This column is only present in a part if some rows in the part were deleted. This column does not exist when a part has all values equal to
True
.
SELECT
queries are transformed to include the mask
When a masked column is used in a query, the
SELECT ... FROM table WHERE condition
query is internally extended with a predicate on
_row_exists
and is transformed to:
sql
SELECT ... FROM table PREWHERE _row_exists WHERE condition
At execution time, the column
_row_exists
is read to determine which rows should not be returned. If there are many deleted rows, ClickHouse can determine which granules can be fully skipped when reading the rest of the columns.
DELETE
queries are transformed to
ALTER TABLE ... UPDATE
queries
The
DELETE FROM table WHERE condition
is translated into an
ALTER TABLE table UPDATE _row_exists = 0 WHERE condition
mutation.
Internally, this mutation is executed in two steps:
A
SELECT count() FROM table WHERE condition
command is executed for each individual part to determine if the part is affected.
Based on the commands above, affected parts are then mutated, and hardlinks are created for unaffected parts. In the case of wide parts, the
_row_exists
column for each row is updated, and all other columns' files are hardlinked. For compact parts, all columns are re-written because they are all stored together in one file. | {"source_file": "delete.md"} | [
-0.00930482055991888,
0.013259769417345524,
-0.04203273355960846,
0.041459403932094574,
0.039248026907444,
-0.12325925379991531,
0.0465213805437088,
-0.07108265161514282,
0.02807978354394436,
-0.00046021046000532806,
0.051913920789957047,
0.01648419164121151,
0.05409357696771622,
-0.041599... |
5e537fab-cc30-4782-b65c-e1f46e536523 | From the steps above, we can see that lightweight
DELETE
using the masking technique improves performance over traditional
ALTER TABLE ... DELETE
because it does not re-write all the columns' files for affected parts.
Related content {#related-content}
Blog:
Handling Updates and Deletes in ClickHouse | {"source_file": "delete.md"} | [
0.002204200020059943,
0.0038692352827638388,
-0.029061879962682724,
0.016613977029919624,
0.1035730168223381,
-0.07413965463638306,
-0.043197546154260635,
-0.10379309207201004,
0.007309907581657171,
0.03792395070195198,
0.06714877486228943,
0.0767396092414856,
0.045582618564367294,
-0.0409... |
c032ed4b-4661-4ac7-abf8-770edd507ae7 | description: 'Lightweight updates simplify the process of updating data in the database using patch parts.'
keywords: ['update']
sidebar_label: 'UPDATE'
sidebar_position: 39
slug: /sql-reference/statements/update
title: 'The Lightweight UPDATE Statement'
doc_type: 'reference'
import BetaBadge from '@theme/badges/BetaBadge';
:::note
Lightweight updates are currently in beta.
If you run into problems, kindly open an issue in the
ClickHouse repository
.
:::
The lightweight
UPDATE
statement updates rows in a table
[db.]table
that match the expression
filter_expr
.
It is called "lightweight update" to contrast it to the
ALTER TABLE ... UPDATE
query, which is a heavyweight process that rewrites entire columns in data parts.
It is only available for the
MergeTree
table engine family.
sql
UPDATE [db.]table [ON CLUSTER cluster] SET column1 = expr1 [, ...] [IN PARTITION partition_expr] WHERE filter_expr;
The
filter_expr
must be of type
UInt8
. This query updates values of the specified columns to the values of the corresponding expressions in rows for which the
filter_expr
takes a non-zero value.
Values are cast to the column type using the
CAST
operator. Updating columns used in the calculation of the primary or partition keys is not supported.
Examples {#examples}
```sql
UPDATE hits SET Title = 'Updated Title' WHERE EventDate = today();
UPDATE wikistat SET hits = hits + 1, time = now() WHERE path = 'ClickHouse';
```
Lightweight updates do not update data immediately {#lightweight-update-does-not-update-data-immediately}
Lightweight
UPDATE
is implemented using
patch parts
- a special kind of data part that contains only the updated columns and rows.
A lightweight
UPDATE
creates patch parts but does not immediately modify the original data physically in storage.
The process of updating is similar to an
INSERT ... SELECT ...
query but the
UPDATE
query waits until the patch part creation is completed before returning.
The updated values are:
-
Immediately visible
in
SELECT
queries through patches application
-
Physically materialized
only during subsequent merges and mutations
-
Automatically cleaned up
once all active parts have the patches materialized
Lightweight updates requirements {#lightweight-update-requirements}
Lightweight updates are supported for
MergeTree
,
ReplacingMergeTree
,
CollapsingMergeTree
engines and their
Replicated
and
Shared
versions.
To use lightweight updates, materialization of
_block_number
and
_block_offset
columns must be enabled using table settings
enable_block_number_column
and
enable_block_offset_column
.
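A minimal sketch of a table that satisfies these requirements (all names are hypothetical):

```sql
CREATE TABLE lw_update_demo
(
    id UInt64,
    value String
)
ENGINE = MergeTree
ORDER BY id
SETTINGS enable_block_number_column = 1,
         enable_block_offset_column = 1;
```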
Lightweight deletes {#lightweight-delete}
A
lightweight
DELETE
query can be run as a lightweight
UPDATE
instead of a
ALTER UPDATE
mutation. The implementation of lightweight
DELETE
is controlled by setting
lightweight_delete_mode
.
Performance considerations {#performance-considerations}
Advantages of lightweight updates: | {"source_file": "update.md"} | [
-0.004280535038560629,
-0.00642660865560174,
0.01571909338235855,
0.05228729918599129,
0.004641788080334663,
-0.06719039380550385,
0.06480910629034042,
-0.0019687763415277004,
-0.0890253484249115,
0.06564518809318542,
0.033143557608127594,
-0.018174126744270325,
0.031214691698551178,
-0.15... |
62846310-4706-4878-a23f-0a04ce14b6ba | Performance considerations {#performance-considerations}
Advantages of lightweight updates:
- The latency of the update is comparable to the latency of the
INSERT ... SELECT ...
query
- Only updated columns and values are written, not entire columns in data parts
- No need to wait for currently running merges/mutations to complete, therefore the latency of an update is predictable
- Parallel execution of lightweight updates is possible
Potential performance impacts:
- Adds an overhead to
SELECT
queries that need to apply patches
-
Skipping indexes
will not be used for columns in data parts that have patches to be applied.
Projections
will not be used if there are patch parts for the table, including for data parts that don't have patches to be applied.
- Small, overly frequent updates may lead to a "too many parts" error. It is recommended to batch several updates into a single query, for example by putting the IDs to be updated in a single
IN
clause in the
WHERE
clause
- Lightweight updates are designed to update small amounts of rows (up to about 10% of the table). If you need to update a larger amount, it is recommended to use the
ALTER TABLE ... UPDATE
mutation
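Batching several updates into one query, as recommended above, might look like this (the table and IDs are hypothetical):

```sql
-- one statement instead of three single-row UPDATE queries
UPDATE events SET status = 'done' WHERE id IN (101, 102, 103);
```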
Concurrent operations {#concurrent-operations}
Unlike heavy mutations, lightweight updates don't wait for currently running merges/mutations to complete.
The consistency of concurrent lightweight updates is controlled by settings
update_sequential_consistency
and
update_parallel_mode
.
Update permissions {#update-permissions}
UPDATE
requires the
ALTER UPDATE
privilege. To enable
UPDATE
statements on a specific table for a given user, run:
sql
GRANT ALTER UPDATE ON db.table TO username;
Details of the implementation {#details-of-the-implementation}
Patch parts are the same as the regular parts, but contain only updated columns and several system columns:
-
_part
- the name of the original part
-
_part_offset
- the row number in the original part
-
_block_number
- the block number of the row in the original part
-
_block_offset
- the block offset of the row in the original part
-
_data_version
- the data version of the updated data (block number allocated for the
UPDATE
query)
On average, patch parts add about 40 bytes of overhead (uncompressed data) per updated row.
System columns help to find rows in the original part which should be updated.
System columns are related to the
virtual columns
in the original part, which are added for reading if patch parts should be applied.
Patch parts are sorted by
_part
and
_part_offset
. | {"source_file": "update.md"} | [
-0.035250723361968994,
0.021015077829360962,
-0.006638084538280964,
0.019641088321805,
0.046437978744506836,
-0.07177306711673737,
-0.032084617763757706,
0.00037134249578230083,
0.023996740579605103,
0.063560351729393,
0.02310873754322529,
0.058605827391147614,
-0.009757915511727333,
-0.11... |
5a6434da-6f14-4425-98a9-c88ffd9b8379 | Patch parts belong to different partitions than the original part.
The partition id of the patch part is
patch-<hash of column names in patch part>-<original_partition_id>
.
Therefore patch parts with different columns are stored in different partitions.
For example three updates
SET x = 1 WHERE <cond>
,
SET y = 1 WHERE <cond>
and
SET x = 1, y = 1 WHERE <cond>
will create three patch parts in three different partitions.
Patch parts can be merged among themselves to reduce the amount of applied patches on
SELECT
queries and reduce the overhead. Merging of patch parts uses the
replacing
merge algorithm with
_data_version
as a version column.
Therefore patch parts always store the latest version for each updated row in the part.
Lightweight updates don't wait for currently running merges and mutations to finish and always use a current snapshot of data parts to execute an update and produce a patch part.
Because of this, there are two cases of applying patch parts.
For example if we read part
A
, we need to apply patch part
X
:
- if
X
contains part
A
itself. It happens if
A
was not participating in a merge when the
UPDATE
was executed.
- if
X
contains part
B
and
C
, which are covered by part
A
. It happens if there was a merge (
B
,
C
) ->
A
running when
UPDATE
was executed.
For these two cases there are two ways to apply patch parts respectively:
- Using merge by sorted columns
_part
,
_part_offset
.
- Using join by
_block_number
,
_block_offset
columns.
The join mode is slower and requires more memory than the merge mode, but it is used less often.
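Since patch parts are stored in their own partitions with the patch- prefix described above, one way to observe them is through the system.parts table (a sketch; filtering on partition_id is an assumption based on the naming scheme above):

```sql
SELECT name, partition_id, rows
FROM system.parts
WHERE active AND startsWith(partition_id, 'patch-');
```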
Related Content {#related-content}
ALTER UPDATE
- Heavy
UPDATE
operations
Lightweight
DELETE
- Lightweight
DELETE
operations | {"source_file": "update.md"} | [
-0.03958180174231529,
-0.005365775898098946,
0.051286809146404266,
-0.0606013722717762,
0.02916116639971733,
-0.08397934585809708,
0.002291994635015726,
-0.012919708155095577,
0.008243030868470669,
0.005866959225386381,
0.026538345962762833,
0.0798228457570076,
0.011697886511683464,
-0.100... |
2d9b7d34-9729-4ac4-a795-6c940baee9ba | description: 'Documentation for SYSTEM Statements'
sidebar_label: 'SYSTEM'
sidebar_position: 36
slug: /sql-reference/statements/system
title: 'SYSTEM Statements'
doc_type: 'reference'
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
SYSTEM Statements
SYSTEM RELOAD EMBEDDED DICTIONARIES {#reload-embedded-dictionaries}
Reload all
Internal dictionaries
.
By default, internal dictionaries are disabled.
Always returns
Ok.
regardless of the result of the internal dictionary update.
SYSTEM RELOAD DICTIONARIES {#reload-dictionaries}
Reloads all dictionaries that have been successfully loaded before.
By default, dictionaries are loaded lazily (see
dictionaries_lazy_load
), so instead of being loaded automatically at startup, they are initialized on first access through the dictGet function or a SELECT from tables with ENGINE = Dictionary. The
SYSTEM RELOAD DICTIONARIES
query reloads such dictionaries (LOADED).
Always returns
Ok.
regardless of the result of the dictionary update.
Syntax
sql
SYSTEM RELOAD DICTIONARIES [ON CLUSTER cluster_name]
SYSTEM RELOAD DICTIONARY {#reload-dictionary}
Completely reloads a dictionary
dictionary_name
, regardless of the state of the dictionary (LOADED / NOT_LOADED / FAILED).
Always returns
Ok.
regardless of the result of updating the dictionary.
sql
SYSTEM RELOAD DICTIONARY [ON CLUSTER cluster_name] dictionary_name
The status of the dictionary can be checked by querying the
system.dictionaries
table.
sql
SELECT name, status FROM system.dictionaries;
SYSTEM RELOAD MODELS {#reload-models}
:::note
This statement and
SYSTEM RELOAD MODEL
merely unload catboost models from the clickhouse-library-bridge. The function
catboostEvaluate()
loads a model upon first access if it is not loaded yet.
:::
Unloads all CatBoost models.
Syntax
sql
SYSTEM RELOAD MODELS [ON CLUSTER cluster_name]
SYSTEM RELOAD MODEL {#reload-model}
Unloads a CatBoost model at
model_path
.
Syntax
sql
SYSTEM RELOAD MODEL [ON CLUSTER cluster_name] <model_path>
SYSTEM RELOAD FUNCTIONS {#reload-functions}
Reloads all registered
executable user defined functions
or one of them from a configuration file.
Syntax
sql
SYSTEM RELOAD FUNCTIONS [ON CLUSTER cluster_name]
SYSTEM RELOAD FUNCTION [ON CLUSTER cluster_name] function_name
SYSTEM RELOAD ASYNCHRONOUS METRICS {#reload-asynchronous-metrics}
Re-calculates all
asynchronous metrics
. Since asynchronous metrics are periodically updated based on setting
asynchronous_metrics_update_period_s
, updating them manually using this statement is typically not necessary.
sql
SYSTEM RELOAD ASYNCHRONOUS METRICS [ON CLUSTER cluster_name]
SYSTEM DROP DNS CACHE {#drop-dns-cache}
Clears ClickHouse's internal DNS cache. Sometimes (for old ClickHouse versions) it is necessary to use this command when changing the infrastructure (changing the IP address of another ClickHouse server or the server used by dictionaries). | {"source_file": "system.md"} | [
0.007980979047715664,
-0.037991926074028015,
-0.031271010637283325,
0.05176457762718201,
-0.010598498396575451,
-0.07118739932775497,
0.03113173507153988,
-0.04303041100502014,
-0.01968165673315525,
0.02469467744231224,
0.05687809735536575,
0.06007016450166702,
0.058681514114141464,
-0.055... |
2f3553c2-876a-4ce1-8c53-4ad84b7ae89b | For more convenient (automatic) cache management, see
disable_internal_dns_cache
,
dns_cache_max_entries
,
dns_cache_update_period
parameters.
SYSTEM DROP MARK CACHE {#drop-mark-cache}
Clears the mark cache.
SYSTEM DROP ICEBERG METADATA CACHE {#drop-iceberg-metadata-cache}
Clears the iceberg metadata cache.
SYSTEM DROP TEXT INDEX DICTIONARY CACHE {#drop-text-index-dictionary-cache}
Clears the text index dictionary cache.
SYSTEM DROP TEXT INDEX HEADER CACHE {#drop-text-index-header-cache}
Clears the text index header cache.
SYSTEM DROP TEXT INDEX POSTINGS CACHE {#drop-text-index-postings-cache}
Clears the text index postings cache.
SYSTEM DROP REPLICA {#drop-replica}
Dead replicas of
ReplicatedMergeTree
tables can be dropped using the following syntax:
sql
SYSTEM DROP REPLICA 'replica_name' FROM TABLE database.table;
SYSTEM DROP REPLICA 'replica_name' FROM DATABASE database;
SYSTEM DROP REPLICA 'replica_name';
SYSTEM DROP REPLICA 'replica_name' FROM ZKPATH '/path/to/table/in/zk';
Queries will remove the
ReplicatedMergeTree
replica path in ZooKeeper. It is useful when the replica is dead and its metadata cannot be removed from ZooKeeper by
DROP TABLE
because there is no such table anymore. It will only drop an inactive/stale replica; it cannot drop the local replica. Use
DROP TABLE
for that.
DROP REPLICA
does not drop any tables and does not remove any data or metadata from disk.
The first one removes metadata of
'replica_name'
replica of
database.table
table.
The second one does the same for all replicated tables in the database.
The third one does the same for all replicated tables on the local server.
The fourth one is useful for removing the metadata of a dead replica when all other replicas of a table were dropped. It requires the table path to be specified explicitly. It must be the same path that was passed as the first argument of the
ReplicatedMergeTree
engine on table creation.
SYSTEM DROP DATABASE REPLICA {#drop-database-replica}
Dead replicas of
Replicated
databases can be dropped using the following syntax:
sql
SYSTEM DROP DATABASE REPLICA 'replica_name' [FROM SHARD 'shard_name'] FROM DATABASE database;
SYSTEM DROP DATABASE REPLICA 'replica_name' [FROM SHARD 'shard_name'];
SYSTEM DROP DATABASE REPLICA 'replica_name' [FROM SHARD 'shard_name'] FROM ZKPATH '/path/to/table/in/zk';
Similar to
SYSTEM DROP REPLICA
, but removes the
Replicated
database replica path from ZooKeeper when there's no database to run
DROP DATABASE
. Please note that it does not remove
ReplicatedMergeTree
replicas (so you may need
SYSTEM DROP REPLICA
as well). Shard and replica names are the names that were specified in
Replicated
engine arguments when creating the database. Also, these names can be obtained from
database_shard_name
and
database_replica_name
columns in
system.clusters
. If the
FROM SHARD
clause is missing, then
replica_name
must be a full replica name in
shard_name|replica_name
format.
SYSTEM DROP UNCOMPRESSED CACHE {#drop-uncompressed-cache}
Clears the uncompressed data cache.
The uncompressed data cache is enabled/disabled with the query/user/profile-level setting
use_uncompressed_cache
.
Its size can be configured using the server-level setting
uncompressed_cache_size
.
SYSTEM DROP COMPILED EXPRESSION CACHE {#drop-compiled-expression-cache}
Clears the compiled expression cache.
The compiled expression cache is enabled/disabled with the query/user/profile-level setting
compile_expressions
.
SYSTEM DROP QUERY CONDITION CACHE {#drop-query-condition-cache}
Clears the query condition cache.
SYSTEM DROP QUERY CACHE {#drop-query-cache}
```sql
SYSTEM DROP QUERY CACHE;
SYSTEM DROP QUERY CACHE TAG '<tag>'
```
Clears the
query cache
.
If a tag is specified, only query cache entries with the specified tag are deleted.
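As an illustration, entries can be tagged when the cache is populated and then dropped selectively (the tag 'abc' here is hypothetical):

```sql
-- populate the query cache with a tagged entry
SELECT 1 SETTINGS use_query_cache = true, query_cache_tag = 'abc';

-- drop only the entries tagged 'abc'
SYSTEM DROP QUERY CACHE TAG 'abc';
```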
SYSTEM DROP FORMAT SCHEMA CACHE {#system-drop-schema-format}
Clears cache for schemas loaded from
format_schema_path
.
Supported targets:
- Protobuf: Removes imported Protobuf message definitions from memory.
- Files: Deletes cached schema files stored locally in the
format_schema_path
, generated when
format_schema_source
is set to
query
.
Note: If no target is specified, both caches are cleared.
sql
SYSTEM DROP FORMAT SCHEMA CACHE [FOR Protobuf/Files]
SYSTEM FLUSH LOGS {#flush-logs}
Flushes buffered log messages to system tables, e.g. system.query_log. Mainly useful for debugging since most system tables have a default flush interval of 7.5 seconds.
This will also create the system tables even if the message queue is empty.
sql
SYSTEM FLUSH LOGS [ON CLUSTER cluster_name] [log_name|[database.table]] [, ...]
If you don't want to flush everything, you can flush one or more individual logs by passing either their name or their target table:
sql
SYSTEM FLUSH LOGS query_log, system.query_views_log;
SYSTEM RELOAD CONFIG {#reload-config}
Reloads ClickHouse configuration. Used when configuration is stored in ZooKeeper. Note that
SYSTEM RELOAD CONFIG
does not reload
USER
configuration stored in ZooKeeper, it only reloads
USER
configuration that is stored in
users.xml
. To reload all
USER
config use
SYSTEM RELOAD USERS
sql
SYSTEM RELOAD CONFIG [ON CLUSTER cluster_name]
SYSTEM RELOAD USERS {#reload-users}
Reloads all access storages, including: users.xml, local disk access storage, replicated (in ZooKeeper) access storage.
sql
SYSTEM RELOAD USERS [ON CLUSTER cluster_name]
SYSTEM SHUTDOWN {#shutdown}
Normally shuts down ClickHouse (like
service clickhouse-server stop
/
kill {$pid_clickhouse-server}
)
SYSTEM KILL {#kill}
Aborts ClickHouse process (like
kill -9 {$pid_clickhouse-server}
)
Managing Distributed Tables {#managing-distributed-tables}
ClickHouse can manage
distributed
tables. When a user inserts data into these tables, ClickHouse first creates a queue of the data that should be sent to cluster nodes, then asynchronously sends it. You can manage queue processing with the
STOP DISTRIBUTED SENDS
,
FLUSH DISTRIBUTED
, and
START DISTRIBUTED SENDS
queries. You can also synchronously insert distributed data with the
distributed_foreground_insert
setting.
SYSTEM STOP DISTRIBUTED SENDS {#stop-distributed-sends}
Disables background data distribution when inserting data into distributed tables.
sql
SYSTEM STOP DISTRIBUTED SENDS [db.]<distributed_table_name> [ON CLUSTER cluster_name]
:::note
If
prefer_localhost_replica
is enabled (the default), data for the local shard will be inserted anyway.
:::
SYSTEM FLUSH DISTRIBUTED {#flush-distributed}
Forces ClickHouse to send data to cluster nodes synchronously. If any nodes are unavailable, ClickHouse throws an exception and stops query execution. You can retry the query until it succeeds, which will happen when all nodes are back online.
You can also override some settings via
SETTINGS
clause, this can be useful to avoid some temporary limitations, like
max_concurrent_queries_for_all_users
or
max_memory_usage
.
sql
SYSTEM FLUSH DISTRIBUTED [db.]<distributed_table_name> [ON CLUSTER cluster_name] [SETTINGS ...]
:::note
Each pending block is stored on disk with the settings from the initial INSERT query, which is why you may sometimes want to override settings.
:::
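For example, a flush can lift a memory limit that the pending blocks were originally stored with (the table name here is hypothetical):

```sql
SYSTEM FLUSH DISTRIBUTED db.distributed_table SETTINGS max_memory_usage = 0;
```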
SYSTEM START DISTRIBUTED SENDS {#start-distributed-sends}
Enables background data distribution when inserting data into distributed tables.
sql
SYSTEM START DISTRIBUTED SENDS [db.]<distributed_table_name> [ON CLUSTER cluster_name]
SYSTEM STOP LISTEN {#stop-listen}
Closes the socket and gracefully terminates the existing connections to the server on the specified port with the specified protocol.
However, if the corresponding protocol settings were not specified in the clickhouse-server configuration, this command will have no effect.
sql
SYSTEM STOP LISTEN [ON CLUSTER cluster_name] [QUERIES ALL | QUERIES DEFAULT | QUERIES CUSTOM | TCP | TCP WITH PROXY | TCP SECURE | HTTP | HTTPS | MYSQL | GRPC | POSTGRESQL | PROMETHEUS | CUSTOM 'protocol']
If
CUSTOM 'protocol'
modifier is specified, the custom protocol with the specified name defined in the protocols section of the server configuration will be stopped.
If
QUERIES ALL [EXCEPT .. [,..]]
modifier is specified, all protocols are stopped, unless specified with
EXCEPT
clause.
If
QUERIES DEFAULT [EXCEPT .. [,..]]
modifier is specified, all default protocols are stopped, unless specified with
EXCEPT
clause.
If
QUERIES CUSTOM [EXCEPT .. [,..]]
modifier is specified, all custom protocols are stopped, unless specified with
EXCEPT
clause.
SYSTEM START LISTEN {#start-listen}
Allows new connections to be established on the specified protocols.
However, if the server on the specified port and protocol was not stopped using the SYSTEM STOP LISTEN command, this command will have no effect.
sql
SYSTEM START LISTEN [ON CLUSTER cluster_name] [QUERIES ALL | QUERIES DEFAULT | QUERIES CUSTOM | TCP | TCP WITH PROXY | TCP SECURE | HTTP | HTTPS | MYSQL | GRPC | POSTGRESQL | PROMETHEUS | CUSTOM 'protocol']
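A typical sequence, e.g. draining HTTP traffic during maintenance and then re-enabling it (a sketch):

```sql
SYSTEM STOP LISTEN HTTP;
-- ... perform maintenance ...
SYSTEM START LISTEN HTTP;
```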
Managing MergeTree Tables {#managing-mergetree-tables}
ClickHouse can manage background processes in
MergeTree
tables.
SYSTEM STOP MERGES {#stop-merges}
Provides the possibility to stop background merges for tables in the MergeTree family:
sql
SYSTEM STOP MERGES [ON CLUSTER cluster_name] [ON VOLUME <volume_name> | [db.]merge_tree_family_table_name]
:::note
DETACH / ATTACH
table will start background merges for the table even in case when merges have been stopped for all MergeTree tables before.
:::
SYSTEM START MERGES {#start-merges}
Provides the possibility to start background merges for tables in the MergeTree family:
sql
SYSTEM START MERGES [ON CLUSTER cluster_name] [ON VOLUME <volume_name> | [db.]merge_tree_family_table_name]
SYSTEM STOP TTL MERGES {#stop-ttl-merges}
Provides the possibility to stop background deletion of old data according to the
TTL expression
for tables in the MergeTree family.
Returns
Ok.
even if the table does not exist or does not have a MergeTree engine. Returns an error when the database does not exist:
sql
SYSTEM STOP TTL MERGES [ON CLUSTER cluster_name] [[db.]merge_tree_family_table_name]
SYSTEM START TTL MERGES {#start-ttl-merges}
Provides the possibility to start background deletion of old data according to the
TTL expression
for tables in the MergeTree family.
Returns
Ok.
even if the table does not exist. Returns an error when the database does not exist:
sql
SYSTEM START TTL MERGES [ON CLUSTER cluster_name] [[db.]merge_tree_family_table_name]
SYSTEM STOP MOVES {#stop-moves}
Provides the possibility to stop background moving of data according to a
TTL table expression with TO VOLUME or TO DISK clause
for tables in the MergeTree family.
Returns
Ok.
even if the table does not exist. Returns an error when the database does not exist:
sql
SYSTEM STOP MOVES [ON CLUSTER cluster_name] [[db.]merge_tree_family_table_name]
SYSTEM START MOVES {#start-moves}
Provides the possibility to start background moving of data according to a
TTL table expression with TO VOLUME or TO DISK clause
for tables in the MergeTree family.
Returns
Ok.
even if the table does not exist. Returns an error when the database does not exist:
sql
SYSTEM START MOVES [ON CLUSTER cluster_name] [[db.]merge_tree_family_table_name]
SYSTEM UNFREEZE {#query_language-system-unfreeze}
Clears a frozen backup with the specified name from all the disks. See more about unfreezing separate parts in
ALTER TABLE table_name UNFREEZE WITH NAME
sql
SYSTEM UNFREEZE WITH NAME <backup_name>
SYSTEM WAIT LOADING PARTS {#wait-loading-parts}
Waits until all asynchronously loading data parts of a table (outdated data parts) become loaded.
sql
SYSTEM WAIT LOADING PARTS [ON CLUSTER cluster_name] [db.]merge_tree_family_table_name
Managing ReplicatedMergeTree Tables {#managing-replicatedmergetree-tables}
ClickHouse can manage background replication related processes in
ReplicatedMergeTree
tables.
SYSTEM STOP FETCHES {#stop-fetches}
Provides the possibility to stop background fetches of inserted parts for tables in the
ReplicatedMergeTree
family:
Always returns
Ok.
regardless of the table engine and even if the table or database does not exist.
sql
SYSTEM STOP FETCHES [ON CLUSTER cluster_name] [[db.]replicated_merge_tree_family_table_name]
SYSTEM START FETCHES {#start-fetches}
Provides the possibility to start background fetches of inserted parts for tables in the
ReplicatedMergeTree
family:
Always returns
Ok.
regardless of the table engine and even if the table or database does not exist.
sql
SYSTEM START FETCHES [ON CLUSTER cluster_name] [[db.]replicated_merge_tree_family_table_name]
SYSTEM STOP REPLICATED SENDS {#stop-replicated-sends}
Provides the possibility to stop background sends of newly inserted parts to other replicas in the cluster for tables in the
ReplicatedMergeTree
family:
sql
SYSTEM STOP REPLICATED SENDS [ON CLUSTER cluster_name] [[db.]replicated_merge_tree_family_table_name]
SYSTEM START REPLICATED SENDS {#start-replicated-sends}
Provides the possibility to start background sends of newly inserted parts to other replicas in the cluster for tables in the
ReplicatedMergeTree
family:
sql
SYSTEM START REPLICATED SENDS [ON CLUSTER cluster_name] [[db.]replicated_merge_tree_family_table_name]
SYSTEM STOP REPLICATION QUEUES {#stop-replication-queues}
Provides the possibility to stop background fetch tasks from replication queues which are stored in ZooKeeper for tables in the
ReplicatedMergeTree
family. Possible background task types: merges, fetches, mutations, DDL statements with the ON CLUSTER clause:
sql
SYSTEM STOP REPLICATION QUEUES [ON CLUSTER cluster_name] [[db.]replicated_merge_tree_family_table_name]
SYSTEM START REPLICATION QUEUES {#start-replication-queues}
Provides the possibility to start background fetch tasks from replication queues which are stored in ZooKeeper for tables in the
ReplicatedMergeTree
family. Possible background task types: merges, fetches, mutations, DDL statements with the ON CLUSTER clause:
sql
SYSTEM START REPLICATION QUEUES [ON CLUSTER cluster_name] [[db.]replicated_merge_tree_family_table_name]
SYSTEM STOP PULLING REPLICATION LOG {#stop-pulling-replication-log}
Stops loading new entries from replication log to replication queue in a
ReplicatedMergeTree
table.
sql
SYSTEM STOP PULLING REPLICATION LOG [ON CLUSTER cluster_name] [[db.]replicated_merge_tree_family_table_name]
SYSTEM START PULLING REPLICATION LOG {#start-pulling-replication-log}
Cancels
SYSTEM STOP PULLING REPLICATION LOG
.
sql
SYSTEM START PULLING REPLICATION LOG [ON CLUSTER cluster_name] [[db.]replicated_merge_tree_family_table_name]
SYSTEM SYNC REPLICA {#sync-replica}
Waits until a
ReplicatedMergeTree
table is synced with the other replicas in a cluster, but no more than
receive_timeout
seconds.
sql
SYSTEM SYNC REPLICA [ON CLUSTER cluster_name] [db.]replicated_merge_tree_family_table_name [IF EXISTS] [STRICT | LIGHTWEIGHT [FROM 'srcReplica1'[, 'srcReplica2'[, ...]]] | PULL]
After running this statement the
[db.]replicated_merge_tree_family_table_name
fetches commands from the common replicated log into its own replication queue, and then the query waits till the replica processes all of the fetched commands. The following modifiers are supported:
With
IF EXISTS
(available since 25.6) the query won't throw an error if the table does not exist. This is useful when adding a new replica to a cluster, when it is already part of the cluster configuration but is still in the process of creating and synchronizing the table.
If a
STRICT
modifier was specified then the query waits for the replication queue to become empty. The
STRICT
version may never succeed if new entries constantly appear in the replication queue.
If a
LIGHTWEIGHT
modifier was specified then the query waits only for
GET_PART
,
ATTACH_PART
,
DROP_RANGE
,
REPLACE_RANGE
and
DROP_PART
entries to be processed.
Additionally, the LIGHTWEIGHT modifier supports an optional FROM 'srcReplicas' clause, where 'srcReplicas' is a comma-separated list of source replica names. This extension allows for more targeted synchronization by focusing only on replication tasks originating from the specified source replicas.
If a
PULL
modifier was specified then the query pulls new replication queue entries from ZooKeeper, but does not wait for anything to be processed.
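For instance, the modifiers above combine with the table name like this (replica and table names here are hypothetical):

```sql
SYSTEM SYNC REPLICA db.my_table STRICT;
SYSTEM SYNC REPLICA db.my_table LIGHTWEIGHT FROM 'replica1', 'replica2';
SYSTEM SYNC REPLICA db.my_table PULL;
```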
SYSTEM SYNC DATABASE REPLICA {#sync-database-replica}
Waits until the specified
replicated database
applies all schema changes from the DDL queue of that database.
Syntax
sql
SYSTEM SYNC DATABASE REPLICA replicated_database_name;
SYSTEM RESTART REPLICA {#restart-replica}
Provides the possibility to reinitialize the ZooKeeper session state for a
ReplicatedMergeTree
table. Compares the current state with ZooKeeper as the source of truth and adds tasks to the ZooKeeper queue if needed.
Initialization of replication queue based on ZooKeeper data happens in the same way as for
ATTACH TABLE
statement. For a short time, the table will be unavailable for any operations.
sql
SYSTEM RESTART REPLICA [ON CLUSTER cluster_name] [db.]replicated_merge_tree_family_table_name
SYSTEM RESTORE REPLICA {#restore-replica}
Restores a replica if data is [possibly] present but Zookeeper metadata is lost.
Works only on readonly
ReplicatedMergeTree
tables.
One may execute the query after:
- ZooKeeper root / loss.
- Replicas path /replicas loss.
- Individual replica path /replicas/replica_name/ loss.
Replica attaches locally found parts and sends info about them to Zookeeper.
Parts present on a replica before metadata loss are not re-fetched from other ones if not being outdated (so replica restoration does not mean re-downloading all data over the network).
:::note
Parts in all states are moved to
detached/
folder. Parts active before data loss (committed) are attached.
:::
SYSTEM RESTORE DATABASE REPLICA {#restore-database-replica}
Restores a replica if data is [possibly] present but Zookeeper metadata is lost.
Syntax
sql
SYSTEM RESTORE DATABASE REPLICA repl_db [ON CLUSTER cluster]
Example
```sql
CREATE DATABASE repl_db
ENGINE=Replicated("/clickhouse/repl_db", shard1, replica1);
CREATE TABLE repl_db.test_table (n UInt32)
ENGINE = ReplicatedMergeTree
ORDER BY n PARTITION BY n % 10;
-- zookeeper_delete_path("/clickhouse/repl_db", recursive=True) <- root loss.
SYSTEM RESTORE DATABASE REPLICA repl_db;
```
Syntax
sql
SYSTEM RESTORE REPLICA [db.]replicated_merge_tree_family_table_name [ON CLUSTER cluster_name]
Alternative syntax:
sql
SYSTEM RESTORE REPLICA [ON CLUSTER cluster_name] [db.]replicated_merge_tree_family_table_name
Example
Creating a table on multiple servers. After the replica's metadata in ZooKeeper is lost, the table will attach as read-only as metadata is missing. The last query needs to execute on every replica.
```sql
CREATE TABLE test(n UInt32)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/', '{replica}')
ORDER BY n PARTITION BY n % 10;
INSERT INTO test SELECT * FROM numbers(1000);
-- zookeeper_delete_path("/clickhouse/tables/test", recursive=True) <- root loss.
SYSTEM RESTART REPLICA test;
SYSTEM RESTORE REPLICA test;
```
Another way:
sql
SYSTEM RESTORE REPLICA test ON CLUSTER cluster;
SYSTEM RESTART REPLICAS {#restart-replicas}
Provides the possibility to reinitialize the ZooKeeper session state for all
ReplicatedMergeTree
tables. Compares the current state with ZooKeeper as the source of truth and adds tasks to the ZooKeeper queue if needed.
SYSTEM DROP FILESYSTEM CACHE {#drop-filesystem-cache}
Allows to drop filesystem cache.
sql
SYSTEM DROP FILESYSTEM CACHE [ON CLUSTER cluster_name]
SYSTEM SYNC FILE CACHE {#sync-file-cache}
:::note
It's too heavy and has potential for misuse.
:::
Performs a sync syscall.
sql
SYSTEM SYNC FILE CACHE [ON CLUSTER cluster_name]
SYSTEM LOAD PRIMARY KEY {#load-primary-key}
Load the primary keys for the given table or for all tables.
sql
SYSTEM LOAD PRIMARY KEY [db.]name
sql
SYSTEM LOAD PRIMARY KEY
SYSTEM UNLOAD PRIMARY KEY {#unload-primary-key}
Unload the primary keys for the given table or for all tables.
sql
SYSTEM UNLOAD PRIMARY KEY [db.]name
sql
SYSTEM UNLOAD PRIMARY KEY
Managing Refreshable Materialized Views {#refreshable-materialized-views}
Commands to control background tasks performed by
Refreshable Materialized Views
Keep an eye on
system.view_refreshes
while using them.
SYSTEM REFRESH VIEW {#refresh-view}
Trigger an immediate out-of-schedule refresh of a given view.
sql
SYSTEM REFRESH VIEW [db.]name
SYSTEM WAIT VIEW {#wait-view}
Waits for the currently running refresh to complete. If the refresh fails, it throws an exception. If no refresh is running, it completes immediately, throwing an exception if the previous refresh failed.
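A common pattern is to trigger an out-of-schedule refresh and then block until it finishes (the view name here is hypothetical):

```sql
SYSTEM REFRESH VIEW db.daily_totals;
SYSTEM WAIT VIEW db.daily_totals;
```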
SYSTEM STOP [REPLICATED] VIEW, STOP VIEWS {#stop-view-stop-views}
Disable periodic refreshing of the given view or all refreshable views. If a refresh is in progress, cancel it too.
If the view is in a Replicated or Shared database,
STOP VIEW
only affects the current replica, while
STOP REPLICATED VIEW
affects all replicas.
sql
SYSTEM STOP VIEW [db.]name
sql
SYSTEM STOP VIEWS
SYSTEM START [REPLICATED] VIEW, START VIEWS {#start-view-start-views}
Enable periodic refreshing for the given view or all refreshable views. No immediate refresh is triggered.
If the view is in a Replicated or Shared database,
START VIEW
undoes the effect of
STOP VIEW
, and
START REPLICATED VIEW
undoes the effect of
STOP REPLICATED VIEW
.
sql
SYSTEM START VIEW [db.]name
sql
SYSTEM START VIEWS
SYSTEM CANCEL VIEW {#cancel-view}
If there's a refresh in progress for the given view on the current replica, interrupt and cancel it. Otherwise do nothing.
sql
SYSTEM CANCEL VIEW [db.]name
SYSTEM WAIT VIEW {#system-wait-view}
Waits for the running refresh to complete. If no refresh is running, returns immediately. If the latest refresh attempt failed, reports an error.
Can be used right after creating a new refreshable materialized view (without EMPTY keyword) to wait for the initial refresh to complete.
If the view is in a Replicated or Shared database, and refresh is running on another replica, waits for that refresh to complete.
sql
SYSTEM WAIT VIEW [db.]name
description: 'Documentation for Show'
sidebar_label: 'SHOW'
sidebar_position: 37
slug: /sql-reference/statements/show
title: 'SHOW Statements'
doc_type: 'reference'
:::note
SHOW CREATE (TABLE|DATABASE|USER)
hides secrets unless the following settings are turned on:
display_secrets_in_show_and_select
(server setting)
format_display_secrets_in_show_and_select
(format setting)
Additionally, the user should have the
displaySecretsInShowAndSelect
privilege.
:::
SHOW CREATE TABLE | DICTIONARY | VIEW | DATABASE {#show-create-table--dictionary--view--database}
These statements return a single column of type String,
containing the
CREATE
query used for creating the specified object.
Syntax {#syntax}
sql title="Syntax"
SHOW [CREATE] TABLE | TEMPORARY TABLE | DICTIONARY | VIEW | DATABASE [db.]table|view [INTO OUTFILE filename] [FORMAT format]
:::note
If you use this statement to get the
CREATE
query of system tables,
you will get a
fake
query, which only declares the table structure,
but cannot be used to create a table.
:::
SHOW DATABASES {#show-databases}
This statement prints a list of all databases.
Syntax {#syntax-1}
sql title="Syntax"
SHOW DATABASES [[NOT] LIKE | ILIKE '<pattern>'] [LIMIT <N>] [INTO OUTFILE filename] [FORMAT format]
It is identical to the query:
sql
SELECT name FROM system.databases [WHERE name [NOT] LIKE | ILIKE '<pattern>'] [LIMIT <N>] [INTO OUTFILE filename] [FORMAT format]
Examples {#examples}
In this example we use
SHOW
to obtain database names containing the symbol sequence 'de' in their names:
sql title="Query"
SHOW DATABASES LIKE '%de%'
text title="Response"
ββnameβββββ
β default β
βββββββββββ
We can also do so in a case-insensitive manner:
sql title="Query"
SHOW DATABASES ILIKE '%DE%'
text title="Response"
ββnameβββββ
β default β
βββββββββββ
Or get database names which do not contain 'de' in their names:
sql title="Query"
SHOW DATABASES NOT LIKE '%de%'
text title="Response"
ββnameββββββββββββββββββββββββββββ
β _temporary_and_external_tables β
β system β
β test β
β tutorial β
ββββββββββββββββββββββββββββββββββ
Finally, we can get the names of only the first two databases:
sql title="Query"
SHOW DATABASES LIMIT 2
text title="Response"
ββnameββββββββββββββββββββββββββββ
β _temporary_and_external_tables β
β default β
ββββββββββββββββββββββββββββββββββ
See also {#see-also}
CREATE DATABASE
SHOW TABLES {#show-tables}
The
SHOW TABLES
statement displays a list of tables.
Syntax {#syntax-2}
sql title="Syntax"
SHOW [FULL] [TEMPORARY] TABLES [{FROM | IN} <db>] [[NOT] LIKE | ILIKE '<pattern>'] [LIMIT <N>] [INTO OUTFILE <filename>] [FORMAT <format>]
If the
FROM
clause is not specified, the query returns a list of tables from the current database.
This statement is identical to the query:
sql
SELECT name FROM system.tables [WHERE name [NOT] LIKE | ILIKE '<pattern>'] [LIMIT <N>] [INTO OUTFILE <filename>] [FORMAT <format>]
Examples {#examples-1}
In this example we use the
SHOW TABLES
statement to find all tables containing 'user' in their names:
sql title="Query"
SHOW TABLES FROM system LIKE '%user%'
text title="Response"
ββnameββββββββββββββ
β user_directories β
β users β
ββββββββββββββββββββ
We can also do so in a case-insensitive manner:
sql title="Query"
SHOW TABLES FROM system ILIKE '%USER%'
text title="Response"
ββnameββββββββββββββ
β user_directories β
β users β
ββββββββββββββββββββ
Or to find tables which don't contain the letter 's' in their names:
sql title="Query"
SHOW TABLES FROM system NOT LIKE '%s%'
text title="Response"
ββnameββββββββββ
β metric_log β
β metric_log_0 β
β metric_log_1 β
ββββββββββββββββ
Finally, we can get the names of only the first two tables:
sql title="Query"
SHOW TABLES FROM system LIMIT 2
text title="Response"
ββnameββββββββββββββββββββββββββββ
β aggregate_function_combinators β
β asynchronous_metric_log β
ββββββββββββββββββββββββββββββββββ
See also {#see-also-1}
Create Tables
SHOW CREATE TABLE
SHOW COLUMNS {#show_columns}
The
SHOW COLUMNS
statement displays a list of columns.
Syntax {#syntax-3}
sql title="Syntax"
SHOW [EXTENDED] [FULL] COLUMNS {FROM | IN} <table> [{FROM | IN} <db>] [{[NOT] {LIKE | ILIKE} '<pattern>' | WHERE <expr>}] [LIMIT <N>] [INTO
OUTFILE <filename>] [FORMAT <format>]
The database and table name can be specified in abbreviated form as
<db>.<table>
,
meaning that
FROM tab FROM db
and
FROM db.tab
are equivalent.
If no database is specified, the query returns the list of columns from the current database.
There are also two optional keywords:
EXTENDED
and
FULL
. The
EXTENDED
keyword currently has no effect,
and exists for MySQL compatibility. The
FULL
keyword causes the output to include the collation, comment and privilege columns.
The
SHOW COLUMNS
statement produces a result table with the following structure:
| Column    | Description                                                                                                                   | Type             |
|-----------|-------------------------------------------------------------------------------------------------------------------------------|------------------|
| field     | The name of the column                                                                                                        | String           |
| type      | The column data type. If the query was made through the MySQL wire protocol, then the equivalent type name in MySQL is shown. | String           |
| null      | YES if the column data type is Nullable, NO otherwise                                                                         | String           |
| key       | PRI if the column is part of the primary key, SOR if the column is part of the sorting key, empty otherwise                   | String           |
| default   | Default expression of the column if it is of type ALIAS, DEFAULT, or MATERIALIZED, otherwise NULL.                            | Nullable(String) |
| extra     | Additional information, currently unused                                                                                      | String           |
| collation | (only if FULL keyword was specified) Collation of the column, always NULL because ClickHouse has no per-column collations     | Nullable(String) |
| comment   | (only if FULL keyword was specified) Comment on the column                                                                    | String           |
| privilege | (only if FULL keyword was specified) The privilege you have on this column, currently not available                           | String           |
### Examples {#examples-2}

In this example we'll use the `SHOW COLUMNS` statement to get information about all columns in the table 'orders', starting from 'delivery_':

```sql title="Query"
SHOW COLUMNS FROM 'orders' LIKE 'delivery_%'
```

```text title="Response"
┌─field───────────┬─type─────┬─null─┬─key─────┬─default─┬─extra─┐
│ delivery_date   │ DateTime │ 0    │ PRI SOR │ ᴺᵁᴸᴸ    │       │
│ delivery_status │ Bool     │ 0    │         │ ᴺᵁᴸᴸ    │       │
└─────────────────┴──────────┴──────┴─────────┴─────────┴───────┘
```
### See also {#see-also-2}

- `system.columns`

## SHOW DICTIONARIES {#show-dictionaries}

The `SHOW DICTIONARIES` statement displays a list of Dictionaries.

### Syntax {#syntax-4}

```sql title="Syntax"
SHOW DICTIONARIES [FROM <db>] [LIKE '<pattern>'] [LIMIT <N>] [INTO OUTFILE <filename>] [FORMAT <format>]
```

If the `FROM` clause is not specified, the query returns the list of dictionaries from the current database.

You can get the same results as the `SHOW DICTIONARIES` query in the following way:
```sql
SELECT name FROM system.dictionaries WHERE database = <db> [AND name LIKE <pattern>] [LIMIT <N>] [INTO OUTFILE <filename>] [FORMAT <format>]
```

### Examples {#examples-3}

The following query selects the first two rows from the list of dictionaries in the `db` database whose names contain `reg`:

```sql title="Query"
SHOW DICTIONARIES FROM db LIKE '%reg%' LIMIT 2
```

```text title="Response"
┌─name─────────┐
│ regions      │
│ region_names │
└──────────────┘
```
## SHOW INDEX {#show-index}

Displays a list of primary and data skipping indexes of a table.

This statement mostly exists for compatibility with MySQL. The system tables `system.tables` (for primary keys) and `system.data_skipping_indices` (for data skipping indices) provide equivalent information but in a fashion more native to ClickHouse.

### Syntax {#syntax-5}

```sql title="Syntax"
SHOW [EXTENDED] {INDEX | INDEXES | INDICES | KEYS } {FROM | IN} <table> [{FROM | IN} <db>] [WHERE <expr>] [INTO OUTFILE <filename>] [FORMAT <format>]
```

The database and table name can be specified in abbreviated form as `<db>.<table>`, i.e. `FROM tab FROM db` and `FROM db.tab` are equivalent. If no database is specified, the query assumes the current database.

The optional keyword `EXTENDED` currently has no effect and exists for MySQL compatibility.

The statement produces a result table with the following structure:
| Column          | Description                                                                                                              | Type               |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|--------------------|
| `table`         | The name of the table.                                                                                                   | `String`           |
| `non_unique`    | Always `1` as ClickHouse does not support uniqueness constraints.                                                        | `UInt8`            |
| `key_name`      | The name of the index, `PRIMARY` if the index is a primary key index.                                                    | `String`           |
| `seq_in_index`  | For a primary key index, the position of the column starting from `1`. For a data skipping index: always `1`.            | `UInt8`            |
| `column_name`   | For a primary key index, the name of the column. For a data skipping index: `''` (empty string), see field "expression". | `String`           |
| `collation`     | The sorting of the column in the index: `A` if ascending, `D` if descending, `NULL` if unsorted.                         | `Nullable(String)` |
| `cardinality`   | An estimation of the index cardinality (number of unique values in the index). Currently always 0.                       | `UInt64`           |
| `sub_part`      | Always `NULL` because ClickHouse does not support index prefixes like MySQL.                                             | `Nullable(String)` |
| `packed`        | Always `NULL` because ClickHouse does not support packed indexes (like MySQL).                                           | `Nullable(String)` |
| `null`          | Currently unused                                                                                                         |                    |
| `index_type`    | The index type, e.g. `PRIMARY`, `MINMAX`, `BLOOM_FILTER` etc.                                                            | `String`           |
| `comment`       | Additional information about the index, currently always `''` (empty string).                                            | `String`           |
| `index_comment` | `''` (empty string) because indexes in ClickHouse cannot have a `COMMENT` field (like in MySQL).                         | `String`           |
| `visible`       | If the index is visible to the optimizer, always `YES`.                                                                  | `String`           |
| `expression`    | For a data skipping index, the index expression. For a primary key index: `''` (empty string).                           | `String`           |

### Examples {#examples-4}
In this example we use the `SHOW INDEX` statement to get information about all indexes in the table 'tbl':

```sql title="Query"
SHOW INDEX FROM 'tbl'
```

```text title="Response"
┌─table─┬─non_unique─┬─key_name─┬─seq_in_index─┬─column_name─┬─collation─┬─cardinality─┬─sub_part─┬─packed─┬─null─┬─index_type───┬─comment─┬─index_comment─┬─visible─┬─expression─┐
│ tbl   │          1 │ blf_idx  │            1 │ 1           │ ᴺᵁᴸᴸ      │           0 │ ᴺᵁᴸᴸ     │ ᴺᵁᴸᴸ   │ ᴺᵁᴸᴸ │ BLOOM_FILTER │         │               │ YES     │ d, b       │
│ tbl   │          1 │ mm1_idx  │            1 │ 1           │ ᴺᵁᴸᴸ      │           0 │ ᴺᵁᴸᴸ     │ ᴺᵁᴸᴸ   │ ᴺᵁᴸᴸ │ MINMAX       │         │               │ YES     │ a, c, d    │
│ tbl   │          1 │ mm2_idx  │            1 │ 1           │ ᴺᵁᴸᴸ      │           0 │ ᴺᵁᴸᴸ     │ ᴺᵁᴸᴸ   │ ᴺᵁᴸᴸ │ MINMAX       │         │               │ YES     │ c, d, e    │
│ tbl   │          1 │ PRIMARY  │            1 │ c           │ A         │           0 │ ᴺᵁᴸᴸ     │ ᴺᵁᴸᴸ   │ ᴺᵁᴸᴸ │ PRIMARY      │         │               │ YES     │            │
│ tbl   │          1 │ PRIMARY  │            2 │ a           │ A         │           0 │ ᴺᵁᴸᴸ     │ ᴺᵁᴸᴸ   │ ᴺᵁᴸᴸ │ PRIMARY      │         │               │ YES     │            │
│ tbl   │          1 │ set_idx  │            1 │ 1           │ ᴺᵁᴸᴸ      │           0 │ ᴺᵁᴸᴸ     │ ᴺᵁᴸᴸ   │ ᴺᵁᴸᴸ │ SET          │         │               │ YES     │ e          │
└───────┴────────────┴──────────┴──────────────┴─────────────┴───────────┴─────────────┴──────────┴────────┴──────┴──────────────┴─────────┴───────────────┴─────────┴────────────┘
```
### See also {#see-also-3}

- `system.tables`
- `system.data_skipping_indices`

## SHOW PROCESSLIST {#show-processlist}

Outputs the content of the `system.processes` table, which contains a list of queries that are being processed at the moment, excluding `SHOW PROCESSLIST` queries.

### Syntax {#syntax-6}

```sql title="Syntax"
SHOW PROCESSLIST [INTO OUTFILE filename] [FORMAT format]
```

The `SELECT * FROM system.processes` query returns data about all the current queries.

:::tip
Execute in the console:

```bash
$ watch -n1 "clickhouse-client --query='SHOW PROCESSLIST'"
```
:::
## SHOW GRANTS {#show-grants}

The `SHOW GRANTS` statement shows privileges for a user.

### Syntax {#syntax-7}

```sql title="Syntax"
SHOW GRANTS [FOR user1 [, user2 ...]] [WITH IMPLICIT] [FINAL]
```

If the user is not specified, the query returns privileges for the current user.

The `WITH IMPLICIT` modifier allows showing the implicit grants (e.g., `GRANT SELECT ON system.one`).

The `FINAL` modifier merges all grants from the user and its granted roles (with inheritance).
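For example, for a hypothetical user `john` who was earlier granted `SELECT` on the `db` database (both the user and the grant are illustrative), the output might look like:

```sql title="Query"
SHOW GRANTS FOR john
```

```text title="Response"
┌─GRANTS FOR john──────────────┐
│ GRANT SELECT ON db.* TO john │
└──────────────────────────────┘
```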
## SHOW CREATE USER {#show-create-user}

The `SHOW CREATE USER` statement shows parameters which were used at user creation.

### Syntax {#syntax-8}

```sql title="Syntax"
SHOW CREATE USER [name1 [, name2 ...] | CURRENT_USER]
```

## SHOW CREATE ROLE {#show-create-role}
The `SHOW CREATE ROLE` statement shows parameters which were used at role creation.

### Syntax {#syntax-9}

```sql title="Syntax"
SHOW CREATE ROLE name1 [, name2 ...]
```

## SHOW CREATE ROW POLICY {#show-create-row-policy}

The `SHOW CREATE ROW POLICY` statement shows parameters which were used at row policy creation.

### Syntax {#syntax-10}

```sql title="Syntax"
SHOW CREATE [ROW] POLICY name ON [database1.]table1 [, [database2.]table2 ...]
```

## SHOW CREATE QUOTA {#show-create-quota}

The `SHOW CREATE QUOTA` statement shows parameters which were used at quota creation.

### Syntax {#syntax-11}

```sql title="Syntax"
SHOW CREATE QUOTA [name1 [, name2 ...] | CURRENT]
```

## SHOW CREATE SETTINGS PROFILE {#show-create-settings-profile}

The `SHOW CREATE SETTINGS PROFILE` statement shows parameters which were used at settings profile creation.

### Syntax {#syntax-12}

```sql title="Syntax"
SHOW CREATE [SETTINGS] PROFILE name1 [, name2 ...]
```

## SHOW USERS {#show-users}

The `SHOW USERS` statement returns a list of user account names. To view user account parameters, see the system table `system.users`.

### Syntax {#syntax-13}

```sql title="Syntax"
SHOW USERS
```

## SHOW ROLES {#show-roles}

The `SHOW ROLES` statement returns a list of roles. To view other parameters, see the system tables `system.roles` and `system.role_grants`.

### Syntax {#syntax-14}

```sql title="Syntax"
SHOW [CURRENT|ENABLED] ROLES
```

## SHOW PROFILES {#show-profiles}

The `SHOW PROFILES` statement returns a list of setting profiles. To view settings profile parameters, see the system table `system.settings_profiles`.

### Syntax {#syntax-15}

```sql title="Syntax"
SHOW [SETTINGS] PROFILES
```

## SHOW POLICIES {#show-policies}

The `SHOW POLICIES` statement returns a list of row policies for the specified table. To view row policy parameters, see the system table `system.row_policies`.

### Syntax {#syntax-16}

```sql title="Syntax"
SHOW [ROW] POLICIES [ON [db.]table]
```

## SHOW QUOTAS {#show-quotas}

The `SHOW QUOTAS` statement returns a list of quotas. To view quota parameters, see the system table `system.quotas`.

### Syntax {#syntax-17}

```sql title="Syntax"
SHOW QUOTAS
```

## SHOW QUOTA {#show-quota}

The `SHOW QUOTA` statement returns quota consumption for all users or for the current user. To view other parameters, see the system tables `system.quotas_usage` and `system.quota_usage`.

### Syntax {#syntax-18}

```sql title="Syntax"
SHOW [CURRENT] QUOTA
```

## SHOW ACCESS {#show-access}

The `SHOW ACCESS` statement shows all users, roles, profiles, etc. and all their grants.

### Syntax {#syntax-19}

```sql title="Syntax"
SHOW ACCESS
```

## SHOW CLUSTER(S) {#show-clusters}

The `SHOW CLUSTER(S)` statement returns a list of clusters. All available clusters are listed in the `system.clusters` table.
:::note
The `SHOW CLUSTER <name>` query displays `cluster`, `shard_num`, `replica_num`, `host_name`, `host_address`, and `port` of the `system.clusters` table for the specified cluster name.
:::

### Syntax {#syntax-20}

```sql title="Syntax"
SHOW CLUSTER '<name>'
SHOW CLUSTERS [[NOT] LIKE|ILIKE '<pattern>'] [LIMIT <N>]
```

### Examples {#examples-5}

```sql title="Query"
SHOW CLUSTERS;
```

```text title="Response"
┌─cluster──────────────────────────────────────┐
│ test_cluster_two_shards                      │
│ test_cluster_two_shards_internal_replication │
│ test_cluster_two_shards_localhost            │
│ test_shard_localhost                         │
│ test_shard_localhost_secure                  │
│ test_unavailable_shard                       │
└──────────────────────────────────────────────┘
```

```sql title="Query"
SHOW CLUSTERS LIKE 'test%' LIMIT 1;
```

```text title="Response"
┌─cluster─────────────────┐
│ test_cluster_two_shards │
└─────────────────────────┘
```

```sql title="Query"
SHOW CLUSTER 'test_shard_localhost' FORMAT Vertical;
```

```text title="Response"
Row 1:
──────
cluster:      test_shard_localhost
shard_num:    1
replica_num:  1
host_name:    localhost
host_address: 127.0.0.1
port:         9000
```
## SHOW SETTINGS {#show-settings}

The `SHOW SETTINGS` statement returns a list of system settings and their values. It selects data from the `system.settings` table.

### Syntax {#syntax-21}

```sql title="Syntax"
SHOW [CHANGED] SETTINGS LIKE|ILIKE <name>
```

### Clauses {#clauses}

`LIKE|ILIKE` allows specifying a matching pattern for the setting name. It can contain globs such as `%` or `_`. The `LIKE` clause is case-sensitive, `ILIKE` is case-insensitive.

When the `CHANGED` clause is used, the query returns only settings changed from their default values.

### Examples {#examples-6}

Query with the `LIKE` clause:

```sql title="Query"
SHOW SETTINGS LIKE 'send_timeout';
```

```text title="Response"
┌─name─────────┬─type────┬─value─┐
│ send_timeout │ Seconds │ 300   │
└──────────────┴─────────┴───────┘
```

Query with the `ILIKE` clause:

```sql title="Query"
SHOW SETTINGS ILIKE '%CONNECT_timeout%'
```

```text title="Response"
┌─name────────────────────────────────────┬─type─────────┬─value─┐
│ connect_timeout                         │ Seconds      │ 10    │
│ connect_timeout_with_failover_ms        │ Milliseconds │ 50    │
│ connect_timeout_with_failover_secure_ms │ Milliseconds │ 100   │
└─────────────────────────────────────────┴──────────────┴───────┘
```

Query with the `CHANGED` clause:

```sql title="Query"
SHOW CHANGED SETTINGS ILIKE '%MEMORY%'
```

```text title="Response"
┌─name─────────────┬─type───┬─value───────┐
│ max_memory_usage │ UInt64 │ 10000000000 │
└──────────────────┴────────┴─────────────┘
```

## SHOW SETTING {#show-setting}
The `SHOW SETTING` statement outputs the value of the specified setting.

### Syntax {#syntax-22}

```sql title="Syntax"
SHOW SETTING <name>
```
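For example, to output the current value of a single setting (the returned value depends on the server configuration):

```sql title="Query"
SHOW SETTING max_threads
```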
### See also {#see-also-4}

- `system.settings` table

## SHOW FILESYSTEM CACHES {#show-filesystem-caches}

### Examples {#examples-7}

```sql title="Query"
SHOW FILESYSTEM CACHES
```

```text title="Response"
┌─Caches───┐
│ s3_cache │
└──────────┘
```

### See also {#see-also-5}

- `system.settings` table

## SHOW ENGINES {#show-engines}

The `SHOW ENGINES` statement outputs the content of the `system.table_engines` table, which contains descriptions of the table engines supported by the server and their feature support information.

### Syntax {#syntax-23}

```sql title="Syntax"
SHOW ENGINES [INTO OUTFILE filename] [FORMAT format]
```

### See also {#see-also-6}

- `system.table_engines` table

## SHOW FUNCTIONS {#show-functions}

The `SHOW FUNCTIONS` statement outputs the content of the `system.functions` table.

### Syntax {#syntax-24}

```sql title="Syntax"
SHOW FUNCTIONS [LIKE | ILIKE '<pattern>']
```

If either the `LIKE` or `ILIKE` clause is specified, the query returns a list of system functions whose names match the provided `<pattern>`.
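For example, the following query lists the functions whose names start with `toDate` (the exact result set depends on the server version):

```sql title="Query"
SHOW FUNCTIONS LIKE 'toDate%'
```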
### See Also {#see-also-7}

- `system.functions` table

## SHOW MERGES {#show-merges}

The `SHOW MERGES` statement returns a list of merges. All merges are listed in the `system.merges` table:
| Column              | Description                                                |
|---------------------|------------------------------------------------------------|
| `table`             | Table name.                                                |
| `database`          | The name of the database the table is in.                  |
| `estimate_complete` | The estimated time to complete (in seconds).               |
| `elapsed`           | The time elapsed (in seconds) since the merge started.     |
| `progress`          | The percentage of completed work (0-100 percent).          |
| `is_mutation`       | 1 if this process is a part mutation.                      |
| `size_compressed`   | The total size of the compressed data of the merged parts. |
| `memory_usage`      | Memory consumption of the merge process.                   |

### Syntax {#syntax-25}

```sql title="Syntax"
SHOW MERGES [[NOT] LIKE|ILIKE '<table_name_pattern>'] [LIMIT <N>]
```
### Examples {#examples-8}

```sql title="Query"
SHOW MERGES;
```

```text title="Response"
┌─table──────┬─database─┬─estimate_complete─┬─elapsed─┬─progress─┬─is_mutation─┬─size_compressed─┬─memory_usage─┐
│ your_table │ default  │              0.14 │    0.36 │    73.01 │           0 │ 5.40 MiB        │ 10.25 MiB    │
└────────────┴──────────┴───────────────────┴─────────┴──────────┴─────────────┴─────────────────┴──────────────┘
```
```sql title="Query"
SHOW MERGES LIKE 'your_t%' LIMIT 1;
```

```text title="Response"
┌─table──────┬─database─┬─estimate_complete─┬─elapsed─┬─progress─┬─is_mutation─┬─size_compressed─┬─memory_usage─┐
│ your_table │ default  │              0.14 │    0.36 │    73.01 │           0 │ 5.40 MiB        │ 10.25 MiB    │
└────────────┴──────────┴───────────────────┴─────────┴──────────┴─────────────┴─────────────────┴──────────────┘
```
---
description: 'Documentation for TRUNCATE Statements'
sidebar_label: 'TRUNCATE'
sidebar_position: 52
slug: /sql-reference/statements/truncate
title: 'TRUNCATE Statements'
doc_type: 'reference'
---

# TRUNCATE Statements
The `TRUNCATE` statement in ClickHouse is used to quickly remove all data from a table or database while preserving their structure.

## TRUNCATE TABLE {#truncate-table}

```sql
TRUNCATE TABLE [IF EXISTS] [db.]name [ON CLUSTER cluster] [SYNC]
```

| Parameter            | Description                                                                                                                              |
|----------------------|------------------------------------------------------------------------------------------------------------------------------------------|
| `IF EXISTS`          | Prevents an error if the table does not exist. If omitted, the query returns an error.                                                   |
| `db.name`            | Optional database name.                                                                                                                  |
| `ON CLUSTER cluster` | Runs the command across a specified cluster.                                                                                             |
| `SYNC`               | Makes the truncation synchronous across replicas when using replicated tables. If omitted, truncation happens asynchronously by default. |
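A minimal sketch (the table name `events` here is illustrative), waiting for the truncation to complete on all replicas:

```sql
TRUNCATE TABLE IF EXISTS events SYNC
```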
You can use the `alter_sync` setting to set up waiting for actions to be executed on replicas. You can specify how long (in seconds) to wait for inactive replicas to execute `TRUNCATE` queries with the `replication_wait_for_inactive_replica_timeout` setting.

:::note
If `alter_sync` is set to `2` and some replicas are not active for more than the time specified by the `replication_wait_for_inactive_replica_timeout` setting, then an exception `UNFINISHED` is thrown.
:::

The `TRUNCATE TABLE` query is not supported for the following table engines:

- `View`
- `File`
- `URL`
- `Buffer`
- `Null`
## TRUNCATE ALL TABLES {#truncate-all-tables}

```sql
TRUNCATE [ALL] TABLES FROM [IF EXISTS] db [LIKE | ILIKE | NOT LIKE '<pattern>'] [ON CLUSTER cluster]
```

| Parameter                               | Description                                       |
|-----------------------------------------|---------------------------------------------------|
| `ALL`                                   | Removes data from all tables in the database.     |
| `IF EXISTS`                             | Prevents an error if the database does not exist. |
| `db`                                    | The database name.                                |
| `LIKE \| ILIKE \| NOT LIKE '<pattern>'` | Filters tables by pattern.                        |
| `ON CLUSTER cluster`                    | Runs the command across a cluster.                |

Removes all data from all tables in a database.
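As an illustrative sketch (the database `db1` and the pattern are hypothetical), the following removes the data from every table in `db1` whose name starts with `tmp_`:

```sql
TRUNCATE ALL TABLES FROM IF EXISTS db1 LIKE 'tmp_%'
```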
## TRUNCATE DATABASE {#truncate-database}

```sql
TRUNCATE DATABASE [IF EXISTS] db [ON CLUSTER cluster]
```
| Parameter            | Description                                       |
|----------------------|---------------------------------------------------|
| `IF EXISTS`          | Prevents an error if the database does not exist. |
| `db`                 | The database name.                                |
| `ON CLUSTER cluster` | Runs the command across a specified cluster.      |

Removes all tables from a database but keeps the database itself. When the clause `IF EXISTS` is omitted, the query returns an error if the database does not exist.

:::note
`TRUNCATE DATABASE` is not supported for `Replicated` databases. Instead, just `DROP` and `CREATE` the database.
:::
---
description: 'Documentation for EXCHANGE Statement'
sidebar_label: 'EXCHANGE'
sidebar_position: 49
slug: /sql-reference/statements/exchange
title: 'EXCHANGE Statement'
doc_type: 'reference'
---

# EXCHANGE Statement

Exchanges the names of two tables or dictionaries atomically.
This task can also be accomplished with a `RENAME` query using a temporary name, but the operation is not atomic in that case.

:::note
The `EXCHANGE` query is supported by the `Atomic` and `Shared` database engines only.
:::

## Syntax

```sql
EXCHANGE TABLES|DICTIONARIES [db0.]name_A AND [db1.]name_B [ON CLUSTER cluster]
```

## EXCHANGE TABLES {#exchange-tables}

Exchanges the names of two tables.

### Syntax

```sql
EXCHANGE TABLES [db0.]table_A AND [db1.]table_B [ON CLUSTER cluster]
```
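A common use, sketched here with hypothetical table names, is to populate a staging table offline and then atomically swap it with the live one, so readers never observe a missing table:

```sql
EXCHANGE TABLES db.staging_data AND db.live_data
```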
## EXCHANGE DICTIONARIES {#exchange-dictionaries}

Exchanges the names of two dictionaries.

### Syntax

```sql
EXCHANGE DICTIONARIES [db0.]dict_A AND [db1.]dict_B [ON CLUSTER cluster]
```

## See Also

- Dictionaries
---
description: 'Documentation for Detach'
sidebar_label: 'DETACH'
sidebar_position: 43
slug: /sql-reference/statements/detach
title: 'DETACH Statement'
doc_type: 'reference'
---

# DETACH Statement

Makes the server "forget" about the existence of a table, a materialized view, a dictionary, or a database.
## Syntax

```sql
DETACH TABLE|VIEW|DICTIONARY|DATABASE [IF EXISTS] [db.]name [ON CLUSTER cluster] [PERMANENTLY] [SYNC]
```

Detaching does not delete the data or metadata of a table, a materialized view, a dictionary, or a database. If an entity was not detached `PERMANENTLY`, on the next server launch the server will read the metadata and recall the table/view/dictionary/database again. If an entity was detached `PERMANENTLY`, there will be no automatic recall.

Whether a table, a dictionary, or a database was detached permanently or not, in both cases you can reattach it using the `ATTACH` query.

System log tables can also be attached back (e.g. `query_log`, `text_log`, etc.). Other system tables can't be reattached, but on the next server launch the server will recall those tables again.

`ATTACH MATERIALIZED VIEW` does not work with the short syntax (without `SELECT`), but you can attach it using the `ATTACH TABLE` query.

Note that you cannot permanently detach a table which is already detached (temporarily). But you can attach it back and then detach it permanently again.

Also, you cannot `DROP` a detached table, `CREATE TABLE` with the same name as a permanently detached one, or replace it with another table with the `RENAME TABLE` query.

The `SYNC` modifier executes the action without delay.
## Example

Creating a table:

Query:

```sql
CREATE TABLE test ENGINE = Log AS SELECT * FROM numbers(10);
SELECT * FROM test;
```

Result:

```text
┌─number─┐
│      0 │
│      1 │
│      2 │
│      3 │
│      4 │
│      5 │
│      6 │
│      7 │
│      8 │
│      9 │
└────────┘
```

Detaching the table:

Query:

```sql
DETACH TABLE test;
SELECT * FROM test;
```

Result:

```text
Received exception from server (version 21.4.1):
Code: 60. DB::Exception: Received from localhost:9000. DB::Exception: Table default.test does not exist.
```
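Because the table was not detached `PERMANENTLY`, it can be brought back. A sketch of reattaching it:

```sql
ATTACH TABLE test;
SELECT count() FROM test;
```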
:::note
In ClickHouse Cloud, users should use the `PERMANENTLY` clause, e.g. `DETACH TABLE <table> PERMANENTLY`. If this clause is not used, tables will be reattached on cluster restart, e.g. during upgrades.
:::

## See Also

- Materialized View
- Dictionaries
---
description: 'Documentation for PARALLEL WITH Clause'
sidebar_label: 'PARALLEL WITH'
sidebar_position: 53
slug: /sql-reference/statements/parallel_with
title: 'PARALLEL WITH Clause'
doc_type: 'reference'
---

# PARALLEL WITH Clause

Allows executing multiple statements in parallel.
## Syntax {#syntax}

```sql
statement1 PARALLEL WITH statement2 [PARALLEL WITH statement3 ...]
```

Executes statements `statement1`, `statement2`, `statement3`, ... in parallel with each other. The output of those statements is discarded.

Executing statements in parallel may in many cases be faster than running the same statements in sequence. For example, `statement1 PARALLEL WITH statement2 PARALLEL WITH statement3` is likely to be faster than `statement1; statement2; statement3`.

## Examples {#examples}

Creates two tables in parallel:

```sql
CREATE TABLE table1(x Int32) ENGINE = MergeTree ORDER BY tuple()
PARALLEL WITH
CREATE TABLE table2(y String) ENGINE = MergeTree ORDER BY tuple();
```

Drops two tables in parallel:

```sql
DROP TABLE table1
PARALLEL WITH
DROP TABLE table2;
```

## Settings {#settings}

The setting `max_threads` controls how many threads are spawned.

## Comparison with UNION {#comparison-with-union}

The `PARALLEL WITH` clause is somewhat similar to `UNION`, which also executes its operands in parallel. However, there are some differences:

- `PARALLEL WITH` doesn't return any results from executing its operands; it can only rethrow an exception from them if any;
- `PARALLEL WITH` doesn't require its operands to have the same set of result columns;
- `PARALLEL WITH` can execute any statements (not just `SELECT`).
---
description: 'Documentation for Check Table'
sidebar_label: 'CHECK TABLE'
sidebar_position: 41
slug: /sql-reference/statements/check-table
title: 'CHECK TABLE Statement'
doc_type: 'reference'
---

# CHECK TABLE Statement

The `CHECK TABLE` query in ClickHouse is used to perform a validation check on a specific table or its partitions. It ensures the integrity of the data by verifying the checksums and other internal data structures.
In particular, it compares actual file sizes with the expected values which are stored on the server. If the file sizes do not match the stored values, it means the data is corrupted. This can be caused, for example, by a system crash during query execution.

:::warning
The `CHECK TABLE` query may read all the data in the table and hold some resources, making it resource-intensive.
Consider the potential impact on performance and resource utilization before executing this query.
This query will not improve the performance of the system, and you should not execute it if you are not sure of what you are doing.
:::
## Syntax {#syntax}

The basic syntax of the query is as follows:

```sql
CHECK TABLE table_name [PARTITION partition_expression | PART part_name] [FORMAT format] [SETTINGS check_query_single_value_result = (0|1) [, other_settings]]
```

- `table_name`: Specifies the name of the table that you want to check.
- `partition_expression`: (Optional) If you want to check a specific partition of the table, you can use this expression to specify the partition.
- `part_name`: (Optional) If you want to check a specific part in the table, you can add a string literal to specify the part name.
- `FORMAT format`: (Optional) Allows you to specify the output format of the result.
- `SETTINGS`: (Optional) Allows additional settings.
  - `check_query_single_value_result`: (Optional) This setting allows you to toggle between a detailed result (`0`) or a summarized result (`1`).
  - Other settings can be applied as well. If you don't require a deterministic order for the results, you can set `max_threads` to a value greater than one to speed up the query.
The query response depends on the value of the `check_query_single_value_result` setting.

In case of `check_query_single_value_result = 1`, only a `result` column with a single row is returned. The value inside this row is `1` if the integrity check is passed and `0` if the data is corrupted.

With `check_query_single_value_result = 0` the query returns the following columns:

- `part_path`: Indicates the path to the data part or file name.
- `is_passed`: Returns 1 if the check for this part is successful, 0 otherwise.
- `message`: Any additional messages related to the check, such as errors or success messages.

The `CHECK TABLE` query supports the following table engines:

- `Log`
- `TinyLog`
- `StripeLog`
- `MergeTree` family

Running the query over tables with other table engines causes a `NOT_IMPLEMENTED` exception.
Engines from the `*Log` family do not provide automatic data recovery on failure. Use the `CHECK TABLE` query to track data loss in a timely manner.

## Examples {#examples}

By default the `CHECK TABLE` query shows the general table check status:

```sql
CHECK TABLE test_table;
```

```text
┌─result─┐
│      1 │
└────────┘
```

If you want to see the check status for every individual data part, you may use the `check_query_single_value_result` setting.

Also, to check a specific partition of the table, you can use the `PARTITION` keyword:

```sql
CHECK TABLE t0 PARTITION ID '201003'
FORMAT PrettyCompactMonoBlock
SETTINGS check_query_single_value_result = 0
```

Output:

```text
┌─part_path────┬─is_passed─┬─message─┐
│ 201003_7_7_0 │         1 │         │
│ 201003_3_3_0 │         1 │         │
└──────────────┴───────────┴─────────┘
```

Similarly, you can check a specific part of the table by using the `PART` keyword:

```sql
CHECK TABLE t0 PART '201003_7_7_0'
FORMAT PrettyCompactMonoBlock
SETTINGS check_query_single_value_result = 0
```

Output:

```text
┌─part_path────┬─is_passed─┬─message─┐
│ 201003_7_7_0 │         1 │         │
└──────────────┴───────────┴─────────┘
```

Note that when the part does not exist, the query returns an error:

```sql
CHECK TABLE t0 PART '201003_111_222_0'
```

```text
DB::Exception: No such data part '201003_111_222_0' to check in table 'default.t0'. (NO_SUCH_DATA_PART)
```
Receiving a 'Corrupted' Result {#receiving-a-corrupted-result}
:::warning
Disclaimer: The procedure described here, including manually manipulating or removing files directly from the data directory, is for experimental or development environments only. Do
not
attempt this on a production server, as it may lead to data loss or other unintended consequences.
:::
Remove the existing checksum file:
bash
rm /var/lib/clickhouse-server/data/default/t0/201003_3_3_0/checksums.txt
sql
CHECK TABLE t0 PARTITION ID '201003'
FORMAT PrettyCompactMonoBlock
SETTINGS check_query_single_value_result = 0
Output:
text
┌─part_path────┬─is_passed─┬─message──────────────────────────────────┐
│ 201003_7_7_0 │         1 │                                          │
│ 201003_3_3_0 │         1 │ Checksums recounted and written to disk. │
└──────────────┴───────────┴──────────────────────────────────────────┘
If the checksums.txt file is missing, it can be restored. It will be recalculated and rewritten during the execution of the CHECK TABLE command for the specific partition, and the status will still be reported as 'is_passed = 1'.
You can check all existing
(Replicated)MergeTree
tables at once by using the
CHECK ALL TABLES
query.
sql
CHECK ALL TABLES
FORMAT PrettyCompactMonoBlock
SETTINGS check_query_single_value_result = 0
text
┌─database─┬─table────┬─part_path───┬─is_passed─┬─message─┐
│ default  │ t2       │ all_1_95_3  │         1 │         │
│ db1      │ table_01 │ all_39_39_0 │         1 │         │
│ default  │ t1       │ all_39_39_0 │         1 │         │
│ db1      │ t1       │ all_39_39_0 │         1 │         │
│ db1      │ table_01 │ all_1_6_1   │         1 │         │
│ default  │ t1       │ all_1_6_1   │         1 │         │
│ db1      │ t1       │ all_1_6_1   │         1 │         │
│ db1      │ table_01 │ all_7_38_2  │         1 │         │
│ db1      │ t1       │ all_7_38_2  │         1 │         │
│ default  │ t1       │ all_7_38_2  │         1 │         │
└──────────┴──────────┴─────────────┴───────────┴─────────┘
If the Data Is Corrupted {#if-the-data-is-corrupted}
If the table is corrupted, you can copy the non-corrupted data to another table. To do this:
Create a new table with the same structure as the damaged table. To do this, execute the query
CREATE TABLE <new_table_name> AS <damaged_table_name>
.
Set the
max_threads
value to 1 to process the next query in a single thread. To do this, run the query
SET max_threads = 1
.
Execute the query
INSERT INTO <new_table_name> SELECT * FROM <damaged_table_name>
. This query copies the non-corrupted data from the damaged table to another table. Only the data before the corrupted part will be copied.
Restart the
clickhouse-client
to reset the
max_threads
value.
description: 'Documentation for Attach'
sidebar_label: 'ATTACH'
sidebar_position: 40
slug: /sql-reference/statements/attach
title: 'ATTACH Statement'
doc_type: 'reference'
Attaches a table or a dictionary, for example, when moving a database to another server.
Syntax
sql
ATTACH TABLE|DICTIONARY|DATABASE [IF NOT EXISTS] [db.]name [ON CLUSTER cluster] ...
The query does not create data on disk, but assumes that data is already in the appropriate places, and just adds information about the specified table, dictionary or database to the server. After executing the
ATTACH
query, the server will know about the existence of the table, dictionary or database.
If a table was previously detached (
DETACH
query), meaning that its structure is known, you can use a shorthand form without defining the structure.
Attach Existing Table {#attach-existing-table}
Syntax
sql
ATTACH TABLE [IF NOT EXISTS] [db.]name [ON CLUSTER cluster]
This query is used when starting the server. The server stores table metadata as files with
ATTACH
queries, which it simply runs at launch (with the exception of some system tables, which are explicitly created on the server).
If the table was detached permanently, it won't be reattached at server start, so you need to use the
ATTACH
query explicitly.
Create New Table And Attach Data {#create-new-table-and-attach-data}
With Specified Path to Table Data {#with-specified-path-to-table-data}
The query creates a new table with the provided structure and attaches table data from the provided directory in
user_files
.
Syntax
sql
ATTACH TABLE name FROM 'path/to/data/' (col1 Type1, ...)
Example
Query:
sql
DROP TABLE IF EXISTS test;
INSERT INTO TABLE FUNCTION file('01188_attach/test/data.TSV', 'TSV', 's String, n UInt8') VALUES ('test', 42);
ATTACH TABLE test FROM '01188_attach/test' (s String, n UInt8) ENGINE = File(TSV);
SELECT * FROM test;
Result:
sql
┌─s────┬──n─┐
│ test │ 42 │
└──────┴────┘
With Specified Table UUID {#with-specified-table-uuid}
This query creates a new table with the provided structure and attaches data from the table with the specified UUID.
It is supported by the
Atomic
database engine.
Syntax
sql
ATTACH TABLE name UUID '<uuid>' (col1 Type1, ...)
Attach MergeTree table as ReplicatedMergeTree {#attach-mergetree-table-as-replicatedmergetree}
Allows attaching a non-replicated MergeTree table as a ReplicatedMergeTree table. The ReplicatedMergeTree table will be created with the values of the
default_replica_path
and
default_replica_name
settings. It is also possible to attach a replicated table as a regular MergeTree.
Note that the table's data in ZooKeeper is not affected by this query. This means you have to add metadata in ZooKeeper using
SYSTEM RESTORE REPLICA
or clear it with
SYSTEM DROP REPLICA ... FROM ZKPATH ...
after attach.
If you are trying to add a replica to an existing ReplicatedMergeTree table, keep in mind that all the local data in the converted MergeTree table will be detached.
Syntax
sql
ATTACH TABLE [db.]name AS [NOT] REPLICATED
Convert table to replicated
sql
DETACH TABLE test;
ATTACH TABLE test AS REPLICATED;
SYSTEM RESTORE REPLICA test;
Convert table to not replicated
Get ZooKeeper path and replica name for table:
sql
SELECT replica_name, zookeeper_path FROM system.replicas WHERE table='test';
Result:
sql
┌─replica_name─┬─zookeeper_path─────────────────────────────────────────────┐
│ r1           │ /clickhouse/tables/401e6a1f-9bf2-41a3-a900-abb7e94dff98/s1 │
└──────────────┴────────────────────────────────────────────────────────────┘
Attach table as not replicated and delete replica's data from ZooKeeper:
sql
DETACH TABLE test;
ATTACH TABLE test AS NOT REPLICATED;
SYSTEM DROP REPLICA 'r1' FROM ZKPATH '/clickhouse/tables/401e6a1f-9bf2-41a3-a900-abb7e94dff98/s1';
Attach Existing Dictionary {#attach-existing-dictionary}
Attaches a previously detached dictionary.
Syntax
sql
ATTACH DICTIONARY [IF NOT EXISTS] [db.]name [ON CLUSTER cluster]
Attach Existing Database {#attach-existing-database}
Attaches a previously detached database.
Syntax
sql
ATTACH DATABASE [IF NOT EXISTS] name [ENGINE=<database engine>] [ON CLUSTER cluster]
description: 'Documentation for UNDROP TABLE'
sidebar_label: 'UNDROP'
slug: /sql-reference/statements/undrop
title: 'UNDROP TABLE'
doc_type: 'reference'
UNDROP TABLE
Cancels the dropping of the table.
Beginning with ClickHouse version 23.3 it is possible to UNDROP a table in an Atomic database
within
database_atomic_delay_before_drop_table_sec
(8 minutes by default) of issuing the DROP TABLE statement. Dropped tables are listed in
a system table called
system.dropped_tables
.
If you have a materialized view without a
TO
clause associated with the dropped table, then you will also have to UNDROP the inner table of that view.
:::tip
Also see
DROP TABLE
:::
Syntax:
sql
UNDROP TABLE [db.]name [UUID '<uuid>'] [ON CLUSTER cluster]
Example
sql
CREATE TABLE tab
(
    id UInt8
)
ENGINE = MergeTree
ORDER BY id;
DROP TABLE tab;
SELECT *
FROM system.dropped_tables
FORMAT Vertical;
response
Row 1:
──────
index: 0
database: default
table: tab
uuid: aa696a1a-1d70-4e60-a841-4c80827706cc
engine: MergeTree
metadata_dropped_path: /var/lib/clickhouse/metadata_dropped/default.tab.aa696a1a-1d70-4e60-a841-4c80827706cc.sql
table_dropped_time: 2023-04-05 14:12:12
1 row in set. Elapsed: 0.001 sec.
sql
UNDROP TABLE tab;
SELECT *
FROM system.dropped_tables
FORMAT Vertical;
response
Ok.
0 rows in set. Elapsed: 0.001 sec.
sql
DESCRIBE TABLE tab
FORMAT Vertical;
response
Row 1:
──────
name: id
type: UInt8
default_type:
default_expression:
comment:
codec_expression:
ttl_expression:
description: 'Documentation for EXECUTE AS Statement'
sidebar_label: 'EXECUTE AS'
sidebar_position: 53
slug: /sql-reference/statements/execute_as
title: 'EXECUTE AS Statement'
doc_type: 'reference'
EXECUTE AS Statement
Allows executing queries on behalf of a different user.
Syntax {#syntax}
sql
EXECUTE AS target_user;
EXECUTE AS target_user subquery;
The first form (without
subquery
) specifies that all subsequent queries in the current session will be executed on behalf of the specified
target_user
.
The second form (with
subquery
) executes only the specified
subquery
on behalf of the specified
target_user
.
In order to work, both forms require the server setting
allow_impersonate_user
to be set to
1
and the
IMPERSONATE
privilege to be granted. For example, the following commands
sql
GRANT IMPERSONATE ON user1 TO user2;
GRANT IMPERSONATE ON * TO user3;
allow user
user2
to execute commands
EXECUTE AS user1 ...
and also allow user
user3
to execute commands as any user.
While impersonating another user, the function
currentUser()
returns the name of that other user,
and function
authenticatedUser()
returns the name of the user who has been actually authenticated.
Examples {#examples}
sql
SELECT currentUser(), authenticatedUser(); -- outputs "default default"
CREATE USER james;
EXECUTE AS james SELECT currentUser(), authenticatedUser(); -- outputs "james default"
description: 'Documentation for Machine Learning Functions'
sidebar_label: 'Machine Learning'
slug: /sql-reference/functions/machine-learning-functions
title: 'Machine Learning Functions'
doc_type: 'reference'
Machine learning functions
evalMLMethod {#evalmlmethod}
Prediction using fitted regression models uses
evalMLMethod
function. See the link in
linearRegression
.
stochasticLinearRegression {#stochasticlinearregression}
The
stochasticLinearRegression
aggregate function implements stochastic gradient descent method using linear model and MSE loss function. Uses
evalMLMethod
to predict on new data.
stochasticLogisticRegression {#stochasticlogisticregression}
The
stochasticLogisticRegression
aggregate function implements stochastic gradient descent method for binary classification problem. Uses
evalMLMethod
to predict on new data.
naiveBayesClassifier {#naivebayesclassifier}
Classifies input text using a Naive Bayes model with n-grams and Laplace smoothing. The model must be configured in ClickHouse before use.
Syntax
sql
naiveBayesClassifier(model_name, input_text);
Arguments
model_name
β Name of the pre-configured model.
String
The model must be defined in ClickHouse's configuration files (see below).
input_text
β Text to classify.
String
Input is processed exactly as provided (case/punctuation preserved).
Returned Value
- Predicted class ID as an unsigned integer.
UInt32
Class IDs correspond to categories defined during model construction.
Example
Classify text with a language detection model:
sql
SELECT naiveBayesClassifier('language', 'How are you?');
response
┌─naiveBayesClassifier('language', 'How are you?')─┐
│                                                0 │
└──────────────────────────────────────────────────┘
Result
0
might represent English, while
1
could indicate French - class meanings depend on your training data.
Implementation Details {#implementation-details}
Algorithm
Uses Naive Bayes classification algorithm with
Laplace smoothing
to handle unseen n-grams, with n-gram probabilities estimated as described in
this
.
Key Features
- Supports n-grams of any size
- Three tokenization modes:
-
byte
: Operates on raw bytes. Each byte is one token.
-
codepoint
: Operates on Unicode scalar values decoded from UTF-8. Each codepoint is one token.
-
token
: Splits on runs of Unicode whitespace (regex \s+). Tokens are substrings of non-whitespace; punctuation is part of the token if adjacent (e.g., "you?" is one token).
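As a rough illustration of the scoring such a classifier performs, here is a minimal Python sketch of Laplace-smoothed Naive Bayes over n-gram counts. This is not ClickHouse's implementation; the counts, priors, and n-grams below are invented for the example.

```python
import math

def nb_classify(counts, priors, alpha, ngrams):
    """Pick the class maximizing log P(class) + sum log P(ngram | class).

    counts: {class_id: {ngram: count}}, priors: {class_id: probability},
    alpha: Laplace smoothing factor applied to every n-gram count.
    """
    vocab = {g for per_class in counts.values() for g in per_class}
    best_class, best_score = None, float("-inf")
    for cls, prior in priors.items():
        total = sum(counts[cls].values())
        denom = total + alpha * len(vocab)
        score = math.log(prior)
        for g in ngrams:
            # Laplace smoothing: unseen n-grams still get alpha / denom
            score += math.log((counts[cls].get(g, 0) + alpha) / denom)
        if score > best_score:
            best_class, best_score = cls, score
    return best_class

# Toy bigram model: class 0 ~ English-like, class 1 ~ French-like
counts = {0: {"how are": 5, "are you": 7}, 1: {"comment vas": 6, "vas tu": 8}}
priors = {0: 0.5, 1: 0.5}
print(nb_classify(counts, priors, 1.0, ["how are", "are you"]))  # → 0
```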
Model Configuration {#model-configuration}
You can find sample source code for creating a Naive Bayes model for language detection
here
.
Additionally, sample models and their associated config files are available
here
.
Here is an example configuration for a naive Bayes model in ClickHouse:
xml
<clickhouse>
<nb_models>
<model>
<name>sentiment</name>
<path>/etc/clickhouse-server/config.d/sentiment.bin</path>
<n>2</n>
<mode>token</mode>
<alpha>1.0</alpha>
<priors>
<prior>
<class>0</class>
<value>0.6</value>
</prior>
<prior>
<class>1</class>
<value>0.4</value>
</prior>
</priors>
</model>
</nb_models>
</clickhouse>
Configuration Parameters
| Parameter | Description | Example | Default |
| ---------- | --------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------- | ------------------ |
|
name
| Unique model identifier |
language_detection
|
Required
|
|
path
| Full path to model binary |
/etc/clickhouse-server/config.d/language_detection.bin
|
Required
|
|
mode
| Tokenization method:
-
byte
: Byte sequences
-
codepoint
: Unicode characters
-
token
: Word tokens |
token
|
Required
|
|
n
| N-gram size (
token
mode):
-
1
=single word
-
2
=word pairs
-
3
=word triplets |
2
|
Required
|
|
alpha
| Laplace smoothing factor used during classification to address n-grams that do not appear in the model |
0.5
|
1.0
|
|
priors
| Class probabilities (% of the documents belonging to a class) | 60% class 0, 40% class 1 | Equal distribution |
Model Training Guide
File Format
In human-readable format, for
n=1
and
token
mode, the model might look like this:
text
<class_id> <n-gram> <count>
0 excellent 15
1 refund 28
For
n=3
and
codepoint
mode, it might look like:
text
<class_id> <n-gram> <count>
0 exc 15
1 ref 28
Human-readable format is not used by ClickHouse directly; it must be converted to the binary format described below.
Binary Format Details
Each n-gram is stored as:
1. 4-byte
class_id
(UInt, little-endian)
2. 4-byte
n-gram
bytes length (UInt, little-endian)
3. Raw
n-gram
bytes
4. 4-byte
count
(UInt, little-endian)
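The four-field layout above can be round-tripped with Python's struct module. This is an illustrative serializer/deserializer for the stated layout (unsigned little-endian integers); the file path and sample entries are invented.

```python
import os
import struct
import tempfile

def write_model(entries, path):
    """Serialize (class_id, ngram_bytes, count) triples as:
    4-byte class_id, 4-byte n-gram length, raw n-gram bytes, 4-byte count
    (all integers unsigned little-endian, '<I')."""
    with open(path, "wb") as f:
        for class_id, ngram, count in entries:
            f.write(struct.pack("<I", class_id))
            f.write(struct.pack("<I", len(ngram)))
            f.write(ngram)
            f.write(struct.pack("<I", count))

def read_model(path):
    """Read triples back until end of file."""
    entries = []
    with open(path, "rb") as f:
        while header := f.read(4):
            class_id = struct.unpack("<I", header)[0]
            length = struct.unpack("<I", f.read(4))[0]
            ngram = f.read(length)
            count = struct.unpack("<I", f.read(4))[0]
            entries.append((class_id, ngram, count))
    return entries

path = os.path.join(tempfile.gettempdir(), "nb_model.bin")
entries = [(0, "excellent".encode(), 15), (1, "refund".encode(), 28)]
write_model(entries, path)
assert read_model(path) == entries
```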
Preprocessing Requirements
Before the model is created from the document corpus, the documents must be preprocessed to extract n-grams according to the specified
mode
and
n
. The following steps outline the preprocessing:
1.
Add boundary markers at the start and end of each document based on tokenization mode:
-
Byte
:
0x01
(start),
0xFF
(end)
-
Codepoint
:
U+10FFFE
(start),
U+10FFFF
(end)
-
Token
:
<s>
(start),
</s>
(end)
Note:
(n - 1)
tokens are added at both the beginning and the end of the document.
Example for
n=3
in
token
mode:
Document:
"ClickHouse is fast"
Processed as:
<s> <s> ClickHouse is fast </s> </s>
Generated trigrams:
<s> <s> ClickHouse
<s> ClickHouse is
ClickHouse is fast
is fast </s>
fast </s> </s>
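The padding-and-sliding-window step for token mode can be sketched as follows. The helper name is invented, and real preprocessing may differ in whitespace handling; this reproduces the trigram example above.

```python
def token_ngrams(document, n):
    # Pad with (n - 1) start/end boundary markers, then emit every
    # n-gram of consecutive tokens.
    tokens = ["<s>"] * (n - 1) + document.split() + ["</s>"] * (n - 1)
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

print(token_ngrams("ClickHouse is fast", 3))
# → ['<s> <s> ClickHouse', '<s> ClickHouse is', 'ClickHouse is fast',
#    'is fast </s>', 'fast </s> </s>']
```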
To simplify model creation for
byte
and
codepoint
modes, it may be convenient to first tokenize the document into tokens (a list of
byte
s for
byte
mode and a list of
codepoint
s for
codepoint
mode). Then, append
n - 1
start tokens at the beginning and
n - 1
end tokens at the end of the document. Finally, generate the n-grams and write them to the serialized file.
description: 'Documentation for Introspection Functions'
sidebar_label: 'Introspection'
slug: /sql-reference/functions/introspection
title: 'Introspection Functions'
doc_type: 'reference'
Introspection functions
You can use functions described in this chapter to introspect
ELF
and
DWARF
for query profiling.
:::note
These functions are slow and may have security implications.
:::
For proper operation of introspection functions:
Install the
clickhouse-common-static-dbg
package.
Set the
allow_introspection_functions
setting to 1.
For security reasons introspection functions are disabled by default.
ClickHouse saves profiler reports to the
trace_log
system table. Make sure the table and profiler are configured properly.
demangle {#demangle}
Introduced in: v20.1
Converts a symbol to a C++ function name.
The symbol is usually returned by function
addressToSymbol
.
Syntax
sql
demangle(symbol)
Arguments
symbol
β Symbol from an object file.
String
Returned value
Returns the name of the C++ function, or an empty string if the symbol is not valid.
String
Examples
Selecting the first string from the
trace_log
system table
sql title=Query
SELECT * FROM system.trace_log LIMIT 1 \G;
response title=Response
-- The `trace` field contains the stack trace at the moment of sampling.
Row 1:
──────
event_date: 2019-11-20
event_time: 2019-11-20 16:57:59
revision: 54429
timer_type: Real
thread_number: 48
query_id: 724028bf-f550-45aa-910d-2af6212b94ac
trace: [94138803686098,94138815010911,94138815096522,94138815101224,94138815102091,94138814222988,94138806823642,94138814457211,94138806823642,94138814457211,94138806823642,94138806795179,94138806796144,94138753770094,94138753771646,94138753760572,94138852407232,140399185266395,140399178045583]
Getting a function name for a single address
sql title=Query
SET allow_introspection_functions=1;
SELECT demangle(addressToSymbol(94138803686098)) \G;
response title=Response
Row 1:
──────
demangle(addressToSymbol(94138803686098)): DB::IAggregateFunctionHelper<DB::AggregateFunctionSum<unsigned long, unsigned long, DB::AggregateFunctionSumData<unsigned long> > >::addBatchSinglePlace(unsigned long, char*, DB::IColumn const**, DB::Arena*) const
Applying the function to the whole stack trace
```sql title=Query
SET allow_introspection_functions=1;
-- The arrayMap function allows to process each individual element of the trace array by the demangle function.
-- The result of this processing is shown in the trace_functions column of output.
SELECT
arrayStringConcat(arrayMap(x -> demangle(addressToSymbol(x)), trace), '\n') AS trace_functions
FROM system.trace_log
LIMIT 1
\G
```
response title=Response
Row 1:
──────
trace_functions: DB::IAggregateFunctionHelper<DB::AggregateFunctionSum<unsigned long, unsigned long, DB::AggregateFunctionSumData<unsigned long> > >::addBatchSinglePlace(unsigned long, char*, DB::IColumn const**, DB::Arena*) const
DB::Aggregator::executeWithoutKeyImpl(char*&, unsigned long, DB::Aggregator::AggregateFunctionInstruction*, DB::Arena*) const
DB::Aggregator::executeOnBlock(std::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >, unsigned long, DB::AggregatedDataVariants&, std::vector<DB::IColumn const*, std::allocator<DB::IColumn const*> >&, std::vector<std::vector<DB::IColumn const*, std::allocator<DB::IColumn const*> >, std::allocator<std::vector<DB::IColumn const*, std::allocator<DB::IColumn const*> > > >&, bool&)
DB::Aggregator::executeOnBlock(DB::Block const&, DB::AggregatedDataVariants&, std::vector<DB::IColumn const*, std::allocator<DB::IColumn const*> >&, std::vector<std::vector<DB::IColumn const*, std::allocator<DB::IColumn const*> >, std::allocator<std::vector<DB::IColumn const*, std::allocator<DB::IColumn const*> > > >&, bool&)
DB::Aggregator::execute(std::shared_ptr<DB::IBlockInputStream> const&, DB::AggregatedDataVariants&)
DB::AggregatingBlockInputStream::readImpl()
DB::IBlockInputStream::read()
DB::ExpressionBlockInputStream::readImpl()
DB::IBlockInputStream::read()
DB::ExpressionBlockInputStream::readImpl()
DB::IBlockInputStream::read()
DB::AsynchronousBlockInputStream::calculate()
std::_Function_handler<void (), DB::AsynchronousBlockInputStream::next()::{lambda()#1}>::_M_invoke(std::_Any_data const&)
ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::_List_iterator<ThreadFromGlobalPool>)
ThreadFromGlobalPool::ThreadFromGlobalPool<ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::function<void ()>, int, std::optional<unsigned long>)::{lambda()#3}>(ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::function<void ()>, int, std::optional<unsigned long>)::{lambda()#3}&&)::{lambda()#1}::operator()() const
ThreadPoolImpl<std::thread>::worker(std::_List_iterator<std::thread>)
execute_native_thread_routine
start_thread
clone
isMergeTreePartCoveredBy {#isMergeTreePartCoveredBy}
Introduced in: v25.6
Checks whether the part given as the first argument is covered by the part given as the second argument.
Syntax
sql
isMergeTreePartCoveredBy(nested_part, covering_part)
Arguments
nested_part
β Name of expected nested part.
String
covering_part
β Name of expected covering part.
String
Returned value
Returns
1
if the first part is covered by the second,
0
otherwise.
UInt8
Examples
Basic example
sql title=Query
WITH 'all_12_25_7_4' AS lhs, 'all_7_100_10_20' AS rhs
SELECT isMergeTreePartCoveredBy(rhs, lhs), isMergeTreePartCoveredBy(lhs, rhs);
response title=Response
┌─isMergeTreePartCoveredBy(rhs, lhs)─┬─isMergeTreePartCoveredBy(lhs, rhs)─┐
│                                  0 │                                  1 │
└────────────────────────────────────┴────────────────────────────────────┘
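The containment test can be sketched in Python, assuming the '<partition_id>_<min_block>_<max_block>_<level>[_<mutation>]' part naming used elsewhere in these docs and a partition id without underscores. The coverage rule (same partition, nested block range) is inferred for illustration; the server-side check may be stricter.

```python
def parse_part(name):
    # Assumed layout: <partition_id>_<min_block>_<max_block>_<level>[_<mutation>]
    pieces = name.split("_")
    nums = [int(x) for x in pieces[1:]]
    mutation = nums[3] if len(nums) > 3 else 0
    return pieces[0], nums[0], nums[1], nums[2], mutation

def is_covered_by(nested, covering):
    # 1 if nested's block range lies inside covering's, in the same partition.
    np_, nmin, nmax, *_ = parse_part(nested)
    cp_, cmin, cmax, *_ = parse_part(covering)
    return int(np_ == cp_ and cmin <= nmin and nmax <= cmax)

print(is_covered_by('all_12_25_7_4', 'all_7_100_10_20'),
      is_covered_by('all_7_100_10_20', 'all_12_25_7_4'))  # → 1 0
```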
logTrace {#logTrace}
Introduced in: v20.12
Emits a trace log message to the server log for each
Block
.
Syntax
sql
logTrace(message)
Arguments
message
β Message that is emitted to the server log.
const String
Returned value
Returns
0
always.
UInt8
Examples
Basic example
sql title=Query
SELECT logTrace('logTrace message');
response title=Response
┌─logTrace('logTrace message')─┐
│                            0 │
└──────────────────────────────┘
mergeTreePartInfo {#mergeTreePartInfo}
Introduced in: v25.6
Extracts the useful values from a
MergeTree
part name.
Syntax
sql
mergeTreePartInfo(part_name)
Arguments
part_name
β Name of part to unpack.
String
Returned value
Returns a Tuple with subcolumns:
partition_id
,
min_block
,
max_block
,
level
,
mutation
.
Tuple
Examples
Basic example
sql title=Query
WITH mergeTreePartInfo('all_12_25_7_4') AS info
SELECT info.partition_id, info.min_block, info.max_block, info.level, info.mutation;
response title=Response
┌─info.partition_id─┬─info.min_block─┬─info.max_block─┬─info.level─┬─info.mutation─┐
│ all               │             12 │             25 │          7 │             4 │
└───────────────────┴────────────────┴────────────────┴────────────┴───────────────┘
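The same unpacking can be mimicked in plain Python. The field order matches the tuple subcolumns shown above; the helper name is invented, and real partition ids are assumed here not to contain underscores.

```python
def merge_tree_part_info(part_name):
    # Assumed layout: <partition_id>_<min_block>_<max_block>_<level>[_<mutation>]
    partition_id, rest = part_name.split("_", 1)
    nums = [int(x) for x in rest.split("_")]
    return {
        "partition_id": partition_id,
        "min_block": nums[0],
        "max_block": nums[1],
        "level": nums[2],
        "mutation": nums[3] if len(nums) > 3 else 0,
    }

print(merge_tree_part_info("all_12_25_7_4"))
# → {'partition_id': 'all', 'min_block': 12, 'max_block': 25, 'level': 7, 'mutation': 4}
```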
tid {#tid}
Introduced in: v20.12
Returns the id of the thread in which the current
Block
is processed.
Syntax
sql
tid()
Arguments
None.
Returned value
Returns the current thread id.
UInt64
Examples
Usage example
sql title=Query
SELECT tid();
response title=Response
┌─tid()─┐
│  3878 │
└───────┘
description: 'Documentation for arrayJoin function'
sidebar_label: 'arrayJoin'
slug: /sql-reference/functions/array-join
title: 'arrayJoin function'
doc_type: 'reference'
arrayJoin function
This is a very unusual function.
Normal functions do not change a set of rows, but just change the values in each row (map).
Aggregate functions compress a set of rows (fold or reduce).
The
arrayJoin
function takes each row and generates a set of rows (unfold).
This function takes an array as an argument, and propagates the source row to multiple rows for the number of elements in the array.
All the values in columns are simply copied, except the values in the column where this function is applied; it is replaced with the corresponding array value.
:::note
If the array is empty,
arrayJoin
produces no rows.
To return a single row containing the default value of the array type, you can wrap it with
emptyArrayToSingle
, for example:
arrayJoin(emptyArrayToSingle(...))
.
:::
For example:
sql title="Query"
SELECT arrayJoin([1, 2, 3] AS src) AS dst, 'Hello', src
text title="Response"
┌─dst─┬─\'Hello\'─┬─src─────┐
│   1 │ Hello     │ [1,2,3] │
│   2 │ Hello     │ [1,2,3] │
│   3 │ Hello     │ [1,2,3] │
└─────┴───────────┴─────────┘
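The unfold semantics can be mimicked in plain Python. This is an illustrative model of the behavior, not how ClickHouse executes it: each row is replicated once per array element, with the array column replaced by that element, and empty arrays produce no rows.

```python
def array_join(rows, column):
    # Unfold: one output row per element of rows[column], other columns copied.
    for row in rows:
        for value in row[column]:
            yield {**row, column: value}

rows = [{"dst": [1, 2, 3], "greeting": "Hello"}]
print(list(array_join(rows, "dst")))
# → [{'dst': 1, 'greeting': 'Hello'}, {'dst': 2, 'greeting': 'Hello'},
#    {'dst': 3, 'greeting': 'Hello'}]
```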
The
arrayJoin
function affects all sections of the query, including the
WHERE
section. Notice that the result of the query below is
2
, even though the subquery returned 1 row.
sql title="Query"
SELECT sum(1) AS impressions
FROM
(
SELECT ['Istanbul', 'Berlin', 'Babruysk'] AS cities
)
WHERE arrayJoin(cities) IN ['Istanbul', 'Berlin'];
text title="Response"
┌─impressions─┐
│           2 │
└─────────────┘
A query can use multiple
arrayJoin
functions. In this case, the transformation is performed multiple times and the rows are multiplied.
For example:
sql title="Query"
SELECT
sum(1) AS impressions,
arrayJoin(cities) AS city,
arrayJoin(browsers) AS browser
FROM
(
SELECT
['Istanbul', 'Berlin', 'Babruysk'] AS cities,
['Firefox', 'Chrome', 'Chrome'] AS browsers
)
GROUP BY
2,
3
text title="Response"
┌─impressions─┬─city─────┬─browser─┐
│           2 │ Istanbul │ Chrome  │
│           1 │ Istanbul │ Firefox │
│           2 │ Berlin   │ Chrome  │
│           1 │ Berlin   │ Firefox │
│           2 │ Babruysk │ Chrome  │
│           1 │ Babruysk │ Firefox │
└─────────────┴──────────┴─────────┘
Best practice {#important-note}
Using multiple
arrayJoin
with the same expression may not produce the expected results due to the elimination of common subexpressions.
In those cases, consider modifying repeated array expressions with extra operations that do not affect the join result. For example,
arrayJoin(arraySort(arr))
,
arrayJoin(arrayConcat(arr, []))
Example:
sql
SELECT
arrayJoin(dice) AS first_throw,
/* arrayJoin(dice) as second_throw */ -- is technically correct, but will annihilate result set
arrayJoin(arrayConcat(dice, [])) AS second_throw -- intentionally changed expression to force re-evaluation
FROM (
SELECT [1, 2, 3, 4, 5, 6] AS dice
);
Note the
ARRAY JOIN
syntax in the SELECT query, which provides broader possibilities.
ARRAY JOIN
allows you to convert multiple arrays with the same number of elements at a time.
Example:
sql
SELECT
sum(1) AS impressions,
city,
browser
FROM
(
SELECT
['Istanbul', 'Berlin', 'Babruysk'] AS cities,
['Firefox', 'Chrome', 'Chrome'] AS browsers
)
ARRAY JOIN
cities AS city,
browsers AS browser
GROUP BY
2,
3
text
┌─impressions─┬─city─────┬─browser─┐
│           1 │ Istanbul │ Firefox │
│           1 │ Berlin   │ Chrome  │
│           1 │ Babruysk │ Chrome  │
└─────────────┴──────────┴─────────┘
Or you can use
Tuple
Example:
sql title="Query"
SELECT
sum(1) AS impressions,
(arrayJoin(arrayZip(cities, browsers)) AS t).1 AS city,
t.2 AS browser
FROM
(
SELECT
['Istanbul', 'Berlin', 'Babruysk'] AS cities,
['Firefox', 'Chrome', 'Chrome'] AS browsers
)
GROUP BY
2,
3
text title="Row"
┌─impressions─┬─city─────┬─browser─┐
│           1 │ Istanbul │ Firefox │
│           1 │ Berlin   │ Chrome  │
│           1 │ Babruysk │ Chrome  │
└─────────────┴──────────┴─────────┘
The name
arrayJoin
in ClickHouse comes from its conceptual similarity to the JOIN operation, but applied to arrays within a single row. While traditional JOINs combine rows from different tables,
arrayJoin
"joins" each element of an array in a row, producing multiple rows - one for each array element - while duplicating the other column values. ClickHouse also provides the
ARRAY JOIN
clause syntax, which makes this relationship to traditional JOIN operations even more explicit by using familiar SQL JOIN terminology. This process is also referred to as "unfolding" the array, but the term "join" is used in both the function name and clause because it resembles joining the table with the array elements, effectively expanding the dataset in a way similar to a JOIN operation.
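The row expansion described above can be sketched in Python (an illustrative analogy, not ClickHouse code; the `array_join` helper and dict-based rows are hypothetical):

```python
def array_join(rows, key):
    """Expand each row into one row per element of rows[key],
    duplicating all other column values, like arrayJoin does."""
    out = []
    for row in rows:
        for elem in row[key]:
            expanded = dict(row)   # copy the other column values
            expanded[key] = elem   # replace the array with one element
            out.append(expanded)
    return out

rows = [{"id": 1, "dice": [1, 2, 3]}]
print(array_join(rows, "dice"))
# [{'id': 1, 'dice': 1}, {'id': 1, 'dice': 2}, {'id': 1, 'dice': 3}]
```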
---
description: 'Documentation for Functions for Searching in Strings'
sidebar_label: 'String search'
slug: /sql-reference/functions/string-search-functions
title: 'Functions for Searching in Strings'
doc_type: 'reference'
---
Functions for Searching in Strings
All functions in this section search case-sensitively by default. Case-insensitive search is usually provided by separate function variants.
:::note
Case-insensitive search follows the lowercase-uppercase rules of the English language. E.g. Uppercased
i
in the English language is
I
whereas in the Turkish language it is
Δ°
- results for languages other than English may be unexpected.
:::
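The gotcha in the note can be reproduced in Python, whose default (locale-independent) case mapping follows the same English-like rules:

```python
# English-language case rules: 'i' <-> 'I'
assert 'i'.upper() == 'I'

# Turkish dotted capital İ (U+0130) does not round-trip to plain 'i'
# under these rules; it lowercases to 'i' plus a combining dot above.
print('İ'.lower())          # 'i̇' (two code points)
print('İ'.lower() == 'i')   # False
```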
Functions in this section also assume that the searched string (referred to in this section as
haystack
) and the search string (referred to in this section as
needle
) are single-byte encoded text. If this assumption is
violated, no exception is thrown and results are undefined. Search with UTF-8 encoded strings is usually provided by separate function
variants. Likewise, if a UTF-8 function variant is used and the input strings are not UTF-8 encoded text, no exception is thrown and the
results are undefined. Note that no automatic Unicode normalization is performed, however you can use the
normalizeUTF8*()
functions for that.
General strings functions
and
functions for replacing in strings
are described separately.
:::note
The documentation below is generated from the
system.functions
system table.
:::
countMatches {#countMatches}
Introduced in: v21.1
Returns number of matches of a regular expression in a string.
:::note Version dependent behavior
The behavior of this function depends on the ClickHouse version:
in versions < v25.6, the function stops counting at the first empty match even if a pattern accepts.
in versions >= 25.6, the function continues execution when an empty match occurs. The legacy behavior can be restored using setting
count_matches_stop_at_empty_match = true
;
:::
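The "continue after an empty match" behavior (versions >= 25.6) is analogous to how Python's `re.finditer` handles patterns that can match the empty string (Python's `re` is not RE2; this is an illustration of the counting semantics only):

```python
import re

# 'a*' matches 'a' at position 0, then the empty string at 1, 2 and 3;
# iteration continues past the empty matches instead of stopping.
matches = list(re.finditer(r'a*', 'abc'))
print([m.group() for m in matches])   # ['a', '', '', '']

non_empty = sum(1 for m in matches if m.group())
print(non_empty)                      # 1
```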
Syntax
sql
countMatches(haystack, pattern)
Arguments
haystack
β The string to search in.
String
pattern
β Regular expression pattern.
String
Returned value
Returns the number of matches found.
UInt64
Examples
Count digit sequences
```sql title=Query
SELECT countMatches('hello 123 world 456 test', '[0-9]+')
```

```response title=Response
┌─countMatches('hello 123 world 456 test', '[0-9]+')─┐
│                                                  2 │
└────────────────────────────────────────────────────┘
```
countMatchesCaseInsensitive {#countMatchesCaseInsensitive}
Introduced in: v21.1
Like
countMatches
but performs case-insensitive matching.
Syntax
sql
countMatchesCaseInsensitive(haystack, pattern)
Arguments
haystack
β The string to search in.
String
pattern
β Regular expression pattern.
const String
Returned value
Returns the number of matches found.
UInt64
Examples
Case insensitive count
```sql title=Query
SELECT countMatchesCaseInsensitive('Hello HELLO world', 'hello')
```

```response title=Response
┌─countMatchesCaseInsensitive('Hello HELLO world', 'hello')─┐
│                                                         2 │
└───────────────────────────────────────────────────────────┘
```
countSubstrings {#countSubstrings}
Introduced in: v21.1
Returns how often a substring
needle
occurs in a string
haystack
.
Syntax
sql
countSubstrings(haystack, needle[, start_pos])
Arguments
haystack
β String in which the search is performed.
String
or
Enum
. -
needle
β Substring to be searched.
String
. -
start_pos
β Position (1-based) in
haystack
at which the search starts.
UInt
. Optional.
Returned value
The number of occurrences.
UInt64
Examples
Usage example
```sql title=Query
SELECT countSubstrings('aaaa', 'aa');
```

```response title=Response
┌─countSubstrings('aaaa', 'aa')─┐
│                             2 │
└───────────────────────────────┘
```

With start_pos argument
```sql title=Query
SELECT countSubstrings('abc___abc', 'abc', 4);
```

```response title=Response
┌─countSubstrings('abc___abc', 'abc', 4)─┐
│                                      1 │
└────────────────────────────────────────┘
```
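The counting above is non-overlapping, which can be mirrored with Python's `str.count` (an analogy, not the ClickHouse implementation; `count_substrings` is a hypothetical helper):

```python
def count_substrings(haystack, needle, start_pos=1):
    # start_pos is 1-based, like the ClickHouse argument.
    # str.count counts non-overlapping occurrences, which is why
    # 'aaaa' contains 'aa' twice rather than three times.
    return haystack[start_pos - 1:].count(needle)

print(count_substrings('aaaa', 'aa'))           # 2
print(count_substrings('abc___abc', 'abc', 4))  # 1
```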
countSubstringsCaseInsensitive {#countSubstringsCaseInsensitive}
Introduced in: v21.1
Like
countSubstrings
but counts case-insensitively.
Syntax
sql
countSubstringsCaseInsensitive(haystack, needle[, start_pos])
Arguments
haystack
β String in which the search is performed.
String
or
Enum
needle
β Substring to be searched.
String
start_pos
β Optional. Position (1-based) in
haystack
at which the search starts.
UInt*
Returned value
Returns the number of occurrences of the needle in the haystack.
UInt64
Examples
Usage example
```sql title=Query
SELECT countSubstringsCaseInsensitive('AAAA', 'aa');
```

```response title=Response
┌─countSubstri⋯AAA', 'aa')─┐
│                        2 │
└──────────────────────────┘
```

With start_pos argument
```sql title=Query
SELECT countSubstringsCaseInsensitive('abc___ABC___abc', 'abc', 4);
```

```response title=Response
┌─countSubstri⋯, 'abc', 4)─┐
│                        2 │
└──────────────────────────┘
```
countSubstringsCaseInsensitiveUTF8 {#countSubstringsCaseInsensitiveUTF8}
Introduced in: v21.1
Like
countSubstrings
but counts case-insensitively and assumes that haystack is a UTF-8 string.
Syntax
sql
countSubstringsCaseInsensitiveUTF8(haystack, needle[, start_pos])
Arguments
haystack
β UTF-8 string in which the search is performed.
String
or
Enum
needle
β Substring to be searched.
String
start_pos
β Optional. Position (1-based) in
haystack
at which the search starts.
UInt*
Returned value
Returns the number of occurrences of the needle in the haystack.
UInt64
Examples
Usage example
```sql title=Query
SELECT countSubstringsCaseInsensitiveUTF8('ложка, кошка, картошка', 'КА');
```

```response title=Response
┌─countSubstri⋯шка', 'КА')─┐
│                        4 │
└──────────────────────────┘
```

With start_pos argument
```sql title=Query
SELECT countSubstringsCaseInsensitiveUTF8('ложка, кошка, картошка', 'КА', 13);
```

```response title=Response
┌─countSubstri⋯ 'КА', 13)─┐
│                       2 │
└─────────────────────────┘
```
extract {#extract}
Introduced in: v1.1
Extracts the first match of a regular expression in a string.
If 'haystack' doesn't match 'pattern', an empty string is returned.
This function uses the RE2 regular expression library. Please refer to
re2
for supported syntax.
If the regular expression has capturing groups (sub-patterns), the function matches the input string against the first capturing group.
Syntax
sql
extract(haystack, pattern)
Arguments
haystack
β String from which to extract.
String
pattern
β Regular expression, typically containing a capturing group.
const String
Returned value
Returns extracted fragment as a string.
String
Examples
Extract domain from email
```sql title=Query
SELECT extract('test@clickhouse.com', '.*@(.*)$')
```

```response title=Response
┌─extract('test@clickhouse.com', '.*@(.*)$')─┐
│ clickhouse.com                             │
└────────────────────────────────────────────┘
```

No match returns empty string
```sql title=Query
SELECT extract('test@clickhouse.com', 'no_match')
```

```response title=Response
┌─extract('test@clickhouse.com', 'no_match')─┐
│                                            │
└────────────────────────────────────────────┘
```
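The "first capturing group wins, empty string on no match" behavior can be sketched with Python's `re` (Python's engine is not RE2, and `extract` here is a hypothetical helper):

```python
import re

def extract(haystack, pattern):
    m = re.search(pattern, haystack)
    if m is None:
        return ''
    # With at least one capture group, return group 1;
    # otherwise return the whole match.
    return m.group(1) if m.re.groups else m.group(0)

print(extract('test@clickhouse.com', r'.*@(.*)$'))  # clickhouse.com
print(extract('test@clickhouse.com', r'no_match'))  # ''
```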
extractAll {#extractAll}
Introduced in: v1.1
Like
extract
, but returns an array of all matches of a regular expression in a string.
If 'haystack' doesn't match the 'pattern' regex, an empty array is returned.
If the regular expression has capturing groups (sub-patterns), the function matches the input string against the first capturing group.
Syntax
sql
extractAll(haystack, pattern)
Arguments
haystack
β String from which to extract fragments.
String
pattern
β Regular expression, optionally containing capturing groups.
const String
Returned value
Returns array of extracted fragments.
Array(String)
Examples
Extract all numbers
```sql title=Query
SELECT extractAll('hello 123 world 456', '[0-9]+')
```

```response title=Response
┌─extractAll('hello 123 world 456', '[0-9]+')─┐
│ ['123','456']                               │
└─────────────────────────────────────────────┘
```
Extract using capturing group
```sql title=Query
SELECT extractAll('test@example.com, user@domain.org', '([a-zA-Z0-9]+)@')
```
```response title=Response
┌─extractAll('test@example.com, user@domain.org', '([a-zA-Z0-9]+)@')─┐
│ ['test','user']                                                    │
└────────────────────────────────────────────────────────────────────┘
```
extractAllGroupsHorizontal {#extractAllGroupsHorizontal}
Introduced in: v20.5
Matches all groups of a string using the provided regular expression and returns an array of arrays, where each array contains all captures from the same capturing group, organized by group number.
Syntax
sql
extractAllGroupsHorizontal(s, regexp)
Arguments
s
β Input string to extract from.
String
or
FixedString
regexp
β Regular expression to match by.
const String
or
const FixedString
Returned value
Returns an array of arrays, where each inner array contains all captures from one capturing group across all matches. The first inner array contains all captures from group 1, the second from group 2, etc. If no matches are found, returns an empty array.
Array(Array(String))
Examples
Usage example
```sql title=Query
WITH '< Server: nginx
< Date: Tue, 22 Jan 2019 00:26:14 GMT
< Content-Type: text/html; charset=UTF-8
< Connection: keep-alive
' AS s
SELECT extractAllGroupsHorizontal(s, '< ([\\w\\-]+): ([^\\r\\n]+)');
```

```response title=Response
[['Server','Date','Content-Type','Connection'],['nginx','Tue, 22 Jan 2019 00:26:14 GMT','text/html; charset=UTF-8','keep-alive']]
```
extractAllGroups {#extractAllGroups}
Introduced in: v20.5
Extracts all groups from non-overlapping substrings matched by a regular expression.
Syntax
sql
extractAllGroups(s, regexp)
Arguments
s
β Input string to extract from.
String
or
FixedString
regexp
β Regular expression. Constant.
const String
or
const FixedString
Returned value
If the function finds at least one matching group, it returns Array(Array(String)) column, clustered by group_id (
1
to
N
, where
N
is number of capturing groups in regexp). If there is no matching group, it returns an empty array.
Array(Array(String))
Examples
Usage example
```sql title=Query
WITH '< Server: nginx
< Date: Tue, 22 Jan 2019 00:26:14 GMT
< Content-Type: text/html; charset=UTF-8
< Connection: keep-alive
' AS s
SELECT extractAllGroups(s, '< ([\\w\\-]+): ([^\\r\\n]+)');
```

```response title=Response
[['Server','nginx'],['Date','Tue, 22 Jan 2019 00:26:14 GMT'],['Content-Type','text/html; charset=UTF-8'],['Connection','keep-alive']]
```
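The difference between the "horizontal" (grouped by capture group) and "vertical" (grouped by match) layouts is a transpose, which can be sketched in Python (illustrative analogy; Python's `re` is not RE2):

```python
import re

s = "< Server: nginx\n< Date: Tue"
pattern = r'< ([\w\-]+): ([^\r\n]+)'

# extractAllGroups-style: one inner list per match.
vertical = [list(m) for m in re.findall(pattern, s)]
print(vertical)    # [['Server', 'nginx'], ['Date', 'Tue']]

# extractAllGroupsHorizontal-style: one inner list per capture group.
horizontal = [list(col) for col in zip(*re.findall(pattern, s))]
print(horizontal)  # [['Server', 'Date'], ['nginx', 'Tue']]
```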
hasAllTokens {#hasAllTokens}
Introduced in: v25.7
Like
hasAnyTokens
, but returns 1, if all tokens in the
needle
string or array match the
input
string, and 0 otherwise. If
input
is a column, returns all rows that satisfy this condition.
:::note
Column
input
should have a
text index
defined for optimal performance.
If no text index is defined, the function performs a brute-force column scan which is orders of magnitude slower than an index lookup.
:::
Prior to searching, the function tokenizes
- the
input
argument (always), and
- the
needle
argument (if given as a
String
)
using the tokenizer specified for the text index.
If the column has no text index defined, the
splitByNonAlpha
tokenizer is used instead.
If the
needle
argument is of type
Array(String)
, each array element is treated as a token β no additional tokenization takes place.
Duplicate tokens are ignored.
For example, needles = ['ClickHouse', 'ClickHouse'] is treated the same as ['ClickHouse'].
Syntax
sql
hasAllTokens(input, needles)
Aliases
:
hasAllToken
Arguments
input
β The input column.
String
or
FixedString
or
Array(String)
or
Array(FixedString)
needles
β Tokens to be searched. Supports at most 64 tokens.
String
or
Array(String)
Returned value
Returns 1, if all needles match. 0, otherwise.
UInt8
Examples
Usage example for a string column
```sql title=Query
CREATE TABLE table (
id UInt32,
msg String,
INDEX idx(msg) TYPE text(tokenizer = splitByString(['()', '\\']))
)
ENGINE = MergeTree
ORDER BY id;
INSERT INTO table VALUES (1, '()a,\\bc()d'), (2, '()\\a()bc\\d'), (3, ',()a\\,bc,(),d,');
SELECT count() FROM table WHERE hasAllTokens(msg, 'a\\d()');
```
```response title=Response
┌─count()─┐
│       1 │
└─────────┘
```

Specify needles to be searched for AS-IS (no tokenization) in an array
```sql title=Query
SELECT count() FROM table WHERE hasAllTokens(msg, ['a', 'd']);
```

```response title=Response
┌─count()─┐
│       1 │
└─────────┘
```

Generate needles using the tokens function
```sql title=Query
SELECT count() FROM table WHERE hasAllTokens(msg, tokens('a()d', 'splitByString', ['()', '\\']));
```

```response title=Response
┌─count()─┐
│       1 │
└─────────┘
```
Usage examples for array and map columns
```sql title=Query
CREATE TABLE log (
id UInt32,
tags Array(String),
attributes Map(String, String),
INDEX idx_tags (tags) TYPE text(tokenizer = splitByNonAlpha),
INDEX idx_attributes_keys mapKeys(attributes) TYPE text(tokenizer = array),
INDEX idx_attributes_vals mapValues(attributes) TYPE text(tokenizer = array)
)
ENGINE = MergeTree
ORDER BY id;
INSERT INTO log VALUES
(1, ['clickhouse', 'clickhouse cloud'], {'address': '192.0.0.1', 'log_level': 'INFO'}),
(2, ['chdb'], {'embedded': 'true', 'log_level': 'DEBUG'});
```
```response title=Response
```
Example with an array column
```sql title=Query
SELECT count() FROM log WHERE hasAllTokens(tags, 'clickhouse');
```

```response title=Response
┌─count()─┐
│       1 │
└─────────┘
```
Example with mapKeys
```sql title=Query
SELECT count() FROM log WHERE hasAllTokens(mapKeys(attributes), ['address', 'log_level']);
```

```response title=Response
┌─count()─┐
│       1 │
└─────────┘
```

Example with mapValues
```sql title=Query
SELECT count() FROM log WHERE hasAllTokens(mapValues(attributes), ['192.0.0.1', 'DEBUG']);
```

```response title=Response
┌─count()─┐
│       0 │
└─────────┘
```
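The all-vs-any token semantics reduce to set operations over tokenized strings, sketched here in Python (a rough analogue of the `splitByNonAlpha` tokenizer using ASCII alphanumeric runs; the helpers are hypothetical, and this ignores the text-index fast path entirely):

```python
import re

def tokenize(s):
    # Rough splitByNonAlpha analogue: maximal runs of ASCII alphanumerics.
    return set(re.findall(r'[0-9A-Za-z]+', s))

def has_all_tokens(input_str, needles):
    return tokenize(input_str) >= set(needles)   # superset: every needle present

def has_any_tokens(input_str, needles):
    return bool(tokenize(input_str) & set(needles))  # non-empty intersection

print(has_all_tokens('clickhouse cloud', ['clickhouse', 'cloud']))  # True
print(has_all_tokens('chdb', ['clickhouse', 'chdb']))               # False
print(has_any_tokens('chdb', ['clickhouse', 'chdb']))               # True
```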
hasAnyTokens {#hasAnyTokens}
Introduced in: v25.7
Returns 1, if at least one token in the
needle
string or array matches the
input
string, and 0 otherwise. If
input
is a column, returns all rows that satisfy this condition.
:::note
Column
input
should have a
text index
defined for optimal performance.
If no text index is defined, the function performs a brute-force column scan which is orders of magnitude slower than an index lookup.
:::
Prior to searching, the function tokenizes
- the
input
argument (always), and
- the
needle
argument (if given as a
String
)
using the tokenizer specified for the text index.
If the column has no text index defined, the
splitByNonAlpha
tokenizer is used instead.
If the
needle
argument is of type
Array(String)
, each array element is treated as a token β no additional tokenization takes place.
Duplicate tokens are ignored.
For example, ['ClickHouse', 'ClickHouse'] is treated the same as ['ClickHouse'].
Syntax
sql
hasAnyTokens(input, needles)
Aliases
:
hasAnyToken
Arguments
input
β The input column.
String
or
FixedString
or
Array(String)
or
Array(FixedString)
needles
β Tokens to be searched. Supports at most 64 tokens.
String
or
Array(String)
Returned value
Returns
1
, if there was at least one match.
0
, otherwise.
UInt8
Examples
Usage example for a string column
```sql title=Query
CREATE TABLE table (
id UInt32,
msg String,
INDEX idx(msg) TYPE text(tokenizer = splitByString(['()', '\\']))
)
ENGINE = MergeTree
ORDER BY id;
INSERT INTO table VALUES (1, '()a,\\bc()d'), (2, '()\\a()bc\\d'), (3, ',()a\\,bc,(),d,');
SELECT count() FROM table WHERE hasAnyTokens(msg, 'a\\d()');
```
```response title=Response
┌─count()─┐
│       3 │
└─────────┘
```

Specify needles to be searched for AS-IS (no tokenization) in an array
```sql title=Query
SELECT count() FROM table WHERE hasAnyTokens(msg, ['a', 'd']);
```

```response title=Response
┌─count()─┐
│       3 │
└─────────┘
```

Generate needles using the tokens function
```sql title=Query
SELECT count() FROM table WHERE hasAnyTokens(msg, tokens('a()d', 'splitByString', ['()', '\\']));
```

```response title=Response
┌─count()─┐
│       3 │
└─────────┘
```

Usage examples for array and map columns
```sql title=Query
CREATE TABLE log (
id UInt32,
tags Array(String),
attributes Map(String, String),
INDEX idx_tags (tags) TYPE text(tokenizer = splitByNonAlpha),
INDEX idx_attributes_keys mapKeys(attributes) TYPE text(tokenizer = array),
INDEX idx_attributes_vals mapValues(attributes) TYPE text(tokenizer = array)
)
ENGINE = MergeTree
ORDER BY id;
INSERT INTO log VALUES
(1, ['clickhouse', 'clickhouse cloud'], {'address': '192.0.0.1', 'log_level': 'INFO'}),
(2, ['chdb'], {'embedded': 'true', 'log_level': 'DEBUG'});
```
```response title=Response
```
Example with an array column
```sql title=Query
SELECT count() FROM log WHERE hasAnyTokens(tags, 'clickhouse');
```

```response title=Response
┌─count()─┐
│       1 │
└─────────┘
```

Example with mapKeys
```sql title=Query
SELECT count() FROM log WHERE hasAnyTokens(mapKeys(attributes), ['address', 'log_level']);
```

```response title=Response
┌─count()─┐
│       2 │
└─────────┘
```

Example with mapValues
```sql title=Query
SELECT count() FROM log WHERE hasAnyTokens(mapValues(attributes), ['192.0.0.1', 'DEBUG']);
```

```response title=Response
┌─count()─┐
│       2 │
└─────────┘
```
hasSubsequence {#hasSubsequence}
Introduced in: v23.7
Checks if a needle is a subsequence of a haystack.
A subsequence of a string is a sequence that can be derived from another string by deleting some or no characters without changing the order of the remaining characters.
Syntax
sql
hasSubsequence(haystack, needle)
Arguments
haystack
β String in which to search for the subsequence.
String
needle
β Subsequence to be searched.
String
Returned value
Returns
1
if needle is a subsequence of haystack,
0
otherwise.
UInt8
Examples
Basic subsequence check
```sql title=Query
SELECT hasSubsequence('Hello World', 'HlWrd')
```

```response title=Response
┌─hasSubsequence('Hello World', 'HlWrd')─┐
│                                      1 │
└────────────────────────────────────────┘
```

No subsequence found
```sql title=Query
SELECT hasSubsequence('Hello World', 'xyz')
```

```response title=Response
┌─hasSubsequence('Hello World', 'xyz')─┐
│                                    0 │
└──────────────────────────────────────┘
```
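The subsequence definition above (keep order, allow gaps) has a compact Python sketch (illustrative only; `has_subsequence` is a hypothetical helper, not the ClickHouse implementation):

```python
def has_subsequence(haystack, needle):
    # A single left-to-right scan: each needle character must be found
    # in the remaining (unconsumed) part of the haystack, in order.
    it = iter(haystack)
    return all(ch in it for ch in needle)  # 'in' consumes the iterator

print(has_subsequence('Hello World', 'HlWrd'))  # True
print(has_subsequence('Hello World', 'xyz'))    # False
```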
hasSubsequenceCaseInsensitive {#hasSubsequenceCaseInsensitive}
Introduced in: v23.7
Like
hasSubsequence
but searches case-insensitively.
Syntax
sql
hasSubsequenceCaseInsensitive(haystack, needle)
Arguments
haystack
β String in which the search is performed.
String
needle
β Subsequence to be searched.
String
Returned value
Returns 1, if needle is a subsequence of haystack, 0 otherwise.
UInt8
Examples
Usage example
```sql title=Query
SELECT hasSubsequenceCaseInsensitive('garbage', 'ARG');
```
```response title=Response
┌─hasSubsequenceCaseInsensitive('garbage', 'ARG')─┐
│                                               1 │
└─────────────────────────────────────────────────┘
```
hasSubsequenceCaseInsensitiveUTF8 {#hasSubsequenceCaseInsensitiveUTF8}
Introduced in: v23.7
Like
hasSubsequenceUTF8
but searches case-insensitively.
Syntax
sql
hasSubsequenceCaseInsensitiveUTF8(haystack, needle)
Arguments
haystack
β UTF8-encoded string in which the search is performed.
String
needle
β UTF8-encoded subsequence string to be searched.
String
Returned value
Returns 1, if needle is a subsequence of haystack, 0 otherwise.
UInt8
Examples
Usage example
```sql title=Query
SELECT hasSubsequenceCaseInsensitiveUTF8('ClickHouse - столбцовая система управления базами данных', 'СИСТЕМА');
```

```response title=Response
┌─hasSubsequen⋯ 'СИСТЕМА')─┐
│                        1 │
└──────────────────────────┘
```
hasSubsequenceUTF8 {#hasSubsequenceUTF8}
Introduced in: v23.7
Like
hasSubsequence
but assumes haystack and needle are UTF-8 encoded strings.
Syntax
sql
hasSubsequenceUTF8(haystack, needle)
Arguments
haystack
β The string in which to search.
String
needle
β The subsequence to search for.
String
Returned value
Returns
1
if
needle
is a subsequence of
haystack
, otherwise
0
.
UInt8
Examples
Usage example
```sql title=Query
SELECT hasSubsequenceUTF8('картошка', 'кошка');
```

```response title=Response
┌─hasSubsequen⋯', 'кошка')─┐
│                        1 │
└──────────────────────────┘
```

Non-matching subsequence
```sql title=Query
SELECT hasSubsequenceUTF8('картошка', 'апельсин');
```

```response title=Response
┌─hasSubsequen⋯'апельсин')─┐
│                        0 │
└──────────────────────────┘
```
hasToken {#hasToken}
Introduced in: v20.1
Checks if the given token is present in the haystack.
A token is defined as the longest possible sub-sequence of consecutive characters
[0-9A-Za-z_]
, i.e. numbers, ASCII letters and underscore.
Syntax
sql
hasToken(haystack, token)
Arguments
haystack
β String to be searched.
String
token
β Token to search for.
const String
Returned value
Returns
1
if the token is found,
0
otherwise.
UInt8
Examples
Token search
```sql title=Query
SELECT hasToken('clickhouse test', 'test')
```

```response title=Response
┌─hasToken('clickhouse test', 'test')─┐
│                                   1 │
└─────────────────────────────────────┘
```
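The token definition above (maximal runs of `[0-9A-Za-z_]`) can be sketched in Python to show why a whole-token match differs from a substring match (`has_token` is a hypothetical helper, not the ClickHouse implementation):

```python
import re

def has_token(haystack, token):
    # Tokens are maximal runs of [0-9A-Za-z_]; the token must equal
    # one of them exactly, not merely appear as a substring.
    return token in re.findall(r'[0-9A-Za-z_]+', haystack)

print(has_token('clickhouse test', 'test'))   # True
print(has_token('clickhouse test', 'click'))  # False: not a whole token
```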
hasTokenCaseInsensitive {#hasTokenCaseInsensitive}
Introduced in: v
Performs a case-insensitive lookup of a token in the haystack, usable with a tokenbf_v1 index.
Syntax
```sql
hasTokenCaseInsensitive(haystack, token)
```
Arguments
haystack
β String to be searched.
String
token
β Token to search for.
const String
Returned value
Returns 1 if the token is found, 0 otherwise.
UInt8
Examples
hasTokenCaseInsensitiveOrNull {#hasTokenCaseInsensitiveOrNull}
Introduced in: v
Performs a case-insensitive lookup of a token in the haystack, usable with a tokenbf_v1 index. Returns null if the token is ill-formed.
Syntax
```sql
hasTokenCaseInsensitiveOrNull(haystack, token)
```
Arguments
haystack
β String to be searched.
String
token
β Token to search for.
const String
Returned value
Returns 1 if the token is found, 0 otherwise, null if the token is ill-formed.
Nullable(UInt8)
Examples
hasTokenOrNull {#hasTokenOrNull}
Introduced in: v20.1
Like
hasToken
but returns null if token is ill-formed.
Syntax
sql
hasTokenOrNull(haystack, token)
Arguments
haystack
β String to be searched.
String
token
β Token to search for.
const String
Returned value
Returns
1
if the token is found,
0
otherwise, null if token is ill-formed.
Nullable(UInt8)
Examples
Usage example
```sql title=Query
SELECT hasTokenOrNull('apple banana cherry', 'ban ana');
```

```response title=Response
┌─hasTokenOrNu⋯ 'ban ana')─┐
│                     ᴺᵁᴸᴸ │
└──────────────────────────┘
```
ilike {#ilike}
Introduced in: v20.6
Like
like
but searches case-insensitively.
Syntax
sql
ilike(haystack, pattern)
-- haystack ILIKE pattern
Arguments
haystack
β String in which the search is performed.
String
or
FixedString
pattern
β LIKE pattern to match against.
String
Returned value
Returns
1
if the string matches the LIKE pattern (case-insensitive), otherwise
0
.
UInt8
Examples
Usage example
```sql title=Query
SELECT ilike('ClickHouse', '%house%');
```

```response title=Response
┌─ilike('ClickHouse', '%house%')─┐
│                              1 │
└────────────────────────────────┘
```
like {#like}
Introduced in: v1.1
Returns whether string
haystack
matches the
LIKE
expression
pattern
.
A
LIKE
expression can contain normal characters and the following metasymbols:
%
indicates an arbitrary number of arbitrary characters (including zero characters).
_
indicates a single arbitrary character.
\
is for escaping literals
%
,
_
and
\
.
Matching is based on UTF-8, e.g.
_
matches the Unicode code point
Β₯
which is represented in UTF-8 using two bytes.
If the haystack or the
LIKE
expression are not valid UTF-8, the behavior is undefined.
No automatic Unicode normalization is performed, you can use the
normalizeUTF8*
functions for that.
To match against literal
%
,
_
and
\
(which are
LIKE
metacharacters), prepend them with a backslash:
\%
,
\_
and
\\
.
The backslash loses its special meaning (i.e. is interpreted literally) if it prepends a character different than
%
,
_
or
\
.
:::note
ClickHouse requires backslashes in strings
to be quoted as well
, so you would actually need to write
\\%
,
\\_
and
\\\\
.
:::
For
LIKE
expressions of the form
%needle%
, the function is as fast as the
position
function.
All other LIKE expressions are internally converted to a regular expression and executed with a performance similar to function
match
.
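The LIKE-to-regex translation described above can be sketched in Python (illustrative only: Python's `re` is not RE2, and `like_to_regex`/`like` are hypothetical helpers; the pattern is anchored with `fullmatch` and `.` is allowed to match line breaks via `DOTALL`):

```python
import re

def like_to_regex(pattern):
    # %  -> .*    _ -> .    \ escapes %, _ and \    everything else literal
    out, i = [], 0
    while i < len(pattern):
        c = pattern[i]
        if c == '\\' and i + 1 < len(pattern) and pattern[i + 1] in '%_\\':
            out.append(re.escape(pattern[i + 1]))
            i += 2
            continue
        out.append('.*' if c == '%' else '.' if c == '_' else re.escape(c))
        i += 1
    return ''.join(out)

def like(haystack, pattern):
    # LIKE matches the whole string, hence fullmatch.
    return re.fullmatch(like_to_regex(pattern), haystack, re.DOTALL) is not None

print(like('ClickHouse', '%House'))      # True
print(like('ClickHouse', 'Click_ouse'))  # True
print(like('100%', r'100\%'))            # True: escaped % is literal
```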
Syntax
sql
like(haystack, pattern)
-- haystack LIKE pattern
Arguments
haystack
β String in which the search is performed.
String
or
FixedString
pattern
β
LIKE
pattern to match against. Can contain
%
(matches any number of characters),
_
(matches single character), and
\
for escaping.
String
Returned value
Returns
1
if the string matches the
LIKE
pattern, otherwise
0
.
UInt8
Examples
Usage example
```sql title=Query
SELECT like('ClickHouse', '%House');
```

```response title=Response
┌─like('ClickHouse', '%House')─┐
│                            1 │
└──────────────────────────────┘
```

Single character wildcard
```sql title=Query
SELECT like('ClickHouse', 'Click_ouse');
```

```response title=Response
┌─like('ClickH⋯lick_ouse')─┐
│                        1 │
└──────────────────────────┘
```

Non-matching pattern
```sql title=Query
SELECT like('ClickHouse', '%SQL%');
```

```response title=Response
┌─like('ClickHouse', '%SQL%')─┐
│                           0 │
└─────────────────────────────┘
```
locate {#locate}
Introduced in: v18.16
Like
position
but with arguments
haystack
and
locate
switched.
:::note Version dependent behavior
The behavior of this function depends on the ClickHouse version:
- in versions < v24.3,
locate
was an alias of function
position
and accepted arguments
(haystack, needle[, start_pos])
.
- in versions >= 24.3,
locate
is an individual function (for better compatibility with MySQL) and accepts arguments
(needle, haystack[, start_pos])
.
The previous behavior can be restored using setting
function_locate_has_mysql_compatible_argument_order = false
.
:::
Syntax
sql
locate(needle, haystack[, start_pos])
Arguments
needle
β Substring to be searched.
String
haystack
β String in which the search is performed.
String
or
Enum
start_pos
β Optional. Position (1-based) in
haystack
at which the search starts.
UInt
Returned value
Returns starting position in bytes and counting from 1, if the substring was found,
0
, if the substring was not found.
UInt64
Examples
Basic usage
```sql title=Query
SELECT locate('ca', 'abcabc')
```

```response title=Response
┌─locate('ca', 'abcabc')─┐
│                      3 │
└────────────────────────┘
```
match {#match}
Introduced in: v1.1
Checks if a provided string matches the provided regular expression pattern.
This function uses the RE2 regular expression library. Please refer to
re2
for supported syntax.
Matching works under UTF-8 assumptions, e.g.
Β₯
uses two bytes internally but matching treats it as a single codepoint.
The regular expression must not contain NULL bytes.
If the haystack or the pattern are not valid UTF-8, the behavior is undefined.
Unlike re2's default behavior,
.
matches line breaks. To disable this, prepend the pattern with
(?-s)
.
The pattern is automatically anchored at both ends (as if the pattern started with '^' and ended with '$').
If you only want to find substrings, you can use functions
like
or
position
instead - they work much faster than this function.
Alternative operator syntax:
haystack REGEXP pattern
.
Syntax
sql
match(haystack, pattern)
Aliases
:
REGEXP_MATCHES
Arguments
haystack
— String in which the pattern is searched.
String
pattern
— Regular expression pattern.
const String
Returned value
Returns
1
if the pattern matches,
0
otherwise.
UInt8
Examples
Basic pattern matching
sql title=Query
SELECT match('Hello World', 'Hello.*')
response title=Response
┌─match('Hello World', 'Hello.*')─┐
│                               1 │
└─────────────────────────────────┘
Pattern not matching
sql title=Query
SELECT match('Hello World', 'goodbye.*')
response title=Response
┌─match('Hello World', 'goodbye.*')─┐
│                                 0 │
└───────────────────────────────────┘
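The 1/0 result, including the implicit anchoring described above, can be approximated with Python's `re` module (a hedged sketch: Python's engine differs from re2 in supported syntax and edge cases):

```python
import re

def match(haystack: str, pattern: str) -> int:
    # fullmatch models the implicit '^...$' anchoring described in the docs;
    # DOTALL makes '.' match line breaks, as this function does by default
    return 1 if re.fullmatch(pattern, haystack, flags=re.DOTALL) else 0

print(match('Hello World', 'Hello.*'))    # 1
print(match('Hello World', 'goodbye.*'))  # 0
```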
multiFuzzyMatchAllIndices {#multiFuzzyMatchAllIndices}
Introduced in: v20.1
Like
multiFuzzyMatchAny
but returns the array of all indices in any order that match the haystack within a constant
edit distance
.
Syntax
sql
multiFuzzyMatchAllIndices(haystack, distance, [pattern1, pattern2, ..., patternN])
Arguments
haystack
— String in which the search is performed.
String
distance
— The maximum edit distance for fuzzy matching.
UInt8
pattern
— Array of patterns to match against.
Array(String)
Returned value
Returns an array of all indices (starting from 1) that match the haystack within the specified edit distance in any order. Returns an empty array if no matches are found.
Array(UInt64)
Examples
Usage example
sql title=Query
SELECT multiFuzzyMatchAllIndices('ClickHouse', 2, ['ClickHouse', 'ClckHouse', 'ClickHose', 'House']);
response title=Response
┌─multiFuzzyMa⋯, 'House'])─┐
│                [3,1,4,2] │
└──────────────────────────┘
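For literal (non-regex) patterns, the fuzzy semantics can be sketched as approximate substring matching: a pattern matches if some substring of the haystack is within the given Levenshtein distance. This is a simplified stand-in for the hyperscan-based implementation, not a faithful port:

```python
def fuzzy_substring_distance(pattern: str, haystack: str) -> int:
    """Minimum edit distance between pattern and any substring of haystack."""
    prev = [0] * (len(haystack) + 1)  # an empty pattern matches anywhere at no cost
    for i, p in enumerate(pattern, 1):
        cur = [i] + [0] * len(haystack)
        for j, h in enumerate(haystack, 1):
            cur[j] = min(prev[j] + 1,             # delete pattern char
                         cur[j - 1] + 1,          # insert haystack char
                         prev[j - 1] + (p != h))  # substitute (free on match)
        prev = cur
    return min(prev)

def multi_fuzzy_match_all_indices(haystack, distance, patterns):
    return [i for i, p in enumerate(patterns, 1)
            if fuzzy_substring_distance(p, haystack) <= distance]

print(sorted(multi_fuzzy_match_all_indices(
    'ClickHouse', 2, ['ClickHouse', 'ClckHouse', 'ClickHose', 'House'])))  # [1, 2, 3, 4]
```

`'House'` matches with distance 0 because it is an exact substring; `'ClckHouse'` and `'ClickHose'` each need one edit.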
multiFuzzyMatchAny {#multiFuzzyMatchAny}
Introduced in: v20.1
Like
multiMatchAny
but returns 1 if any pattern matches the haystack within a constant
edit distance
.
This function relies on the experimental feature of
hyperscan
library, and can be slow for some edge cases.
The performance depends on the edit distance value and patterns used, but it's always more expensive compared to non-fuzzy variants.
:::note
multiFuzzyMatch*()
function family do not support UTF-8 regular expressions (it treats them as a sequence of bytes) due to restrictions of hyperscan.
:::
Syntax
sql
multiFuzzyMatchAny(haystack, distance, [pattern1, pattern2, ..., patternN])
Arguments
haystack
— String in which the search is performed.
String
distance
— The maximum edit distance for fuzzy matching.
UInt8
pattern
— An array of patterns to match against.
Array(String)
Returned value
Returns
1
if any pattern matches the haystack within the specified edit distance, otherwise
0
.
UInt8
Examples
Usage example
sql title=Query
SELECT multiFuzzyMatchAny('ClickHouse', 2, ['ClickHouse', 'ClckHouse', 'ClickHose']);
response title=Response
┌─multiFuzzyMa⋯lickHose'])─┐
│                        1 │
└──────────────────────────┘
multiFuzzyMatchAnyIndex {#multiFuzzyMatchAnyIndex}
Introduced in: v20.1
Like
multiFuzzyMatchAny
but returns any index that matches the haystack within a constant
edit distance
.
Syntax
sql
multiFuzzyMatchAnyIndex(haystack, distance, [pattern1, pattern2, ..., patternN])
Arguments
haystack
— String in which the search is performed.
String
distance
— The maximum edit distance for fuzzy matching.
UInt8
pattern
— Array of patterns to match against.
Array(String)
Returned value
Returns the index (starting from 1) of any pattern that matches the haystack within the specified edit distance, otherwise
0
.
UInt64
Examples
Usage example
sql title=Query
SELECT multiFuzzyMatchAnyIndex('ClickHouse', 2, ['ClckHouse', 'ClickHose', 'ClickHouse']);
response title=Response
┌─multiFuzzyMa⋯ickHouse'])─┐
│                        2 │
└──────────────────────────┘
multiMatchAllIndices {#multiMatchAllIndices}
Introduced in: v20.1
Like
multiMatchAny
but returns the array of all indices that match the haystack in any order.
Syntax
sql
multiMatchAllIndices(haystack, [pattern1, pattern2, ..., patternN])
Arguments
haystack
— String in which the search is performed.
String
pattern
— Regular expressions to match against.
Array(String)
Returned value
Array of all indices (starting from 1) that match the haystack in any order. Returns an empty array if no matches are found.
Array(UInt64)
Examples
Usage example
sql title=Query
SELECT multiMatchAllIndices('ClickHouse', ['[0-9]', 'House', 'Click', 'ouse']);
response title=Response
┌─multiMatchAl⋯', 'ouse'])─┐
│                [3, 2, 4] │
└──────────────────────────┘
multiMatchAny {#multiMatchAny}
Introduced in: v20.1
Check if at least one of multiple regular expression patterns matches a haystack.
If you only want to search multiple substrings in a string, you can use function
multiSearchAny
instead - it works much faster than this function.
Syntax
sql
multiMatchAny(haystack, pattern1[, pattern2, ...])
Arguments
haystack
— String in which patterns are searched.
String
pattern1[, pattern2, ...]
— An array of one or more regular expression patterns.
Array(String)
Returned value
Returns
1
if any pattern matches,
0
otherwise.
UInt8
Examples
Multiple pattern matching
sql title=Query
SELECT multiMatchAny('Hello World', ['Hello.*', 'foo.*'])
response title=Response
┌─multiMatchAny('Hello World', ['Hello.*', 'foo.*'])─┐
│                                                  1 │
└────────────────────────────────────────────────────┘
No patterns match
sql title=Query
SELECT multiMatchAny('Hello World', ['goodbye.*', 'foo.*'])
response title=Response
┌─multiMatchAny('Hello World', ['goodbye.*', 'foo.*'])─┐
│                                                    0 │
└──────────────────────────────────────────────────────┘
multiMatchAnyIndex {#multiMatchAnyIndex}
Introduced in: v20.1
Like
multiMatchAny
but returns any index that matches the haystack.
Syntax
sql
multiMatchAnyIndex(haystack, [pattern1, pattern2, ..., patternN])
Arguments
haystack
— String in which the search is performed.
String
pattern
— Regular expressions to match against.
Array(String)
Returned value
Returns the index (starting from 1) of the first pattern that matches, or 0 if no match is found.
UInt64
Examples
Usage example
sql title=Query
SELECT multiMatchAnyIndex('ClickHouse', ['[0-9]', 'House', 'Click']);
response title=Response
┌─multiMatchAn⋯, 'Click'])─┐
│                        3 │
└──────────────────────────┘
multiSearchAllPositions {#multiSearchAllPositions}
Introduced in: v20.1
Like
position
but returns an array of positions (in bytes, starting at 1) for multiple
needle
substrings in a
haystack
string.
All
multiSearch*()
functions only support up to 2^8 needles.
Syntax
sql
multiSearchAllPositions(haystack, needle1[, needle2, ...])
Arguments
haystack
— String in which the search is performed.
String
needle1[, needle2, ...]
— An array of one or more substrings to be searched.
Array(String)
Returned value
Returns an array of starting positions in bytes, counting from 1, if the substring was found, or
0
if the substring was not found.
Array(UInt64)
Examples
Multiple needle search
sql title=Query
SELECT multiSearchAllPositions('Hello, World!', ['hello', '!', 'world'])
response title=Response
┌─multiSearchAllPositions('Hello, World!', ['hello', '!', 'world'])─┐
│                                                          [0,13,0] │
└───────────────────────────────────────────────────────────────────┘
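The case-sensitive, 1-based byte-position semantics (with 0 for misses) can be sketched in Python (an illustrative model, not the ClickHouse implementation):

```python
def multi_search_all_positions(haystack: str, needles) -> list:
    hb = haystack.encode("utf-8")
    # bytes.find returns -1 when absent, so +1 yields the documented 0
    return [hb.find(n.encode("utf-8")) + 1 for n in needles]

print(multi_search_all_positions('Hello, World!', ['hello', '!', 'world']))  # [0, 13, 0]
```

`'hello'` and `'world'` miss because the search is case-sensitive; `'!'` is at byte 13.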
multiSearchAllPositionsCaseInsensitive {#multiSearchAllPositionsCaseInsensitive}
Introduced in: v20.1
Like
multiSearchAllPositions
but ignores case.
Syntax
sql
multiSearchAllPositionsCaseInsensitive(haystack, needle1[, needle2, ...])
Arguments
haystack
— String in which the search is performed.
String
needle1[, needle2, ...]
— An array of one or more substrings to be searched.
Array(String)
Returned value
Returns an array of starting positions in bytes, counting from 1 (if the substring was found), or
0
if the substring was not found.
Array(UInt64)
Examples
Case insensitive multi-search
sql title=Query
SELECT multiSearchAllPositionsCaseInsensitive('ClickHouse',['c','h'])
response title=Response
┌─multiSearchA⋯['c', 'h'])─┐
│                    [1,6] │
└──────────────────────────┘
multiSearchAllPositionsCaseInsensitiveUTF8 {#multiSearchAllPositionsCaseInsensitiveUTF8}
Introduced in: v20.1
Like
multiSearchAllPositionsUTF8
but ignores case.
Syntax
sql
multiSearchAllPositionsCaseInsensitiveUTF8(haystack, [needle1, needle2, ..., needleN])
Arguments
haystack
— UTF-8 encoded string in which the search is performed.
String
needle
— UTF-8 encoded substrings to be searched.
Array(String)
Returned value
Returns an array of starting positions in bytes, counting from 1, if the substring was found. Returns 0 if the substring was not found.
Array(UInt64)
Examples
Case-insensitive UTF-8 search
sql title=Query
SELECT multiSearchAllPositionsCaseInsensitiveUTF8('Здравствуй, мир!', ['здравствуй', 'МИР']);
response title=Response
┌─multiSearchA⋯й', 'МИР'])─┐
│                  [1, 13] │
└──────────────────────────┘
multiSearchAllPositionsUTF8 {#multiSearchAllPositionsUTF8}
Introduced in: v20.1
Like
multiSearchAllPositions
but assumes
haystack
and the
needle
substrings are UTF-8 encoded strings.
Syntax
sql
multiSearchAllPositionsUTF8(haystack, needle1[, needle2, ...])
Arguments
haystack
— UTF-8 encoded string in which the search is performed.
String
needle1[, needle2, ...]
— An array of UTF-8 encoded substrings to be searched.
Array(String)
Returned value
Returns an array of starting positions in bytes, counting from 1 (if the substring was found), or
0
if the substring was not found.
Array(UInt64)
Examples
UTF-8 multi-search
sql title=Query
SELECT multiSearchAllPositionsUTF8('ClickHouse',['C','H'])
response title=Response
┌─multiSearchAllPositionsUTF8('ClickHouse', ['C', 'H'])─┐
│                                                 [1,6] │
└───────────────────────────────────────────────────────┘
multiSearchAny {#multiSearchAny}
Introduced in: v20.1
Checks if at least one of a number of needle strings matches the haystack string.
Functions
multiSearchAnyCaseInsensitive
,
multiSearchAnyUTF8
and
multiSearchAnyCaseInsensitiveUTF8
provide case-insensitive and/or UTF-8 variants of this function.
Syntax
sql
multiSearchAny(haystack, needle1[, needle2, ...])
Arguments
haystack
— String in which the search is performed.
String
needle1[, needle2, ...]
— An array of substrings to be searched.
Array(String)
Returned value
Returns
1
if there was at least one match, otherwise
0
.
UInt8
Examples
Any match search
sql title=Query
SELECT multiSearchAny('ClickHouse',['C','H'])
response title=Response
┌─multiSearchAny('ClickHouse', ['C', 'H'])─┐
│                                        1 │
└──────────────────────────────────────────┘
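The core operation is a containment test over all needles; a minimal Python sketch (illustrative only):

```python
def multi_search_any(haystack: str, needles) -> int:
    # 1 if any needle occurs as a substring (case-sensitive), else 0
    return int(any(n in haystack for n in needles))

print(multi_search_any('ClickHouse', ['C', 'H']))  # 1
```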
multiSearchAnyCaseInsensitive {#multiSearchAnyCaseInsensitive}
Introduced in: v20.1
Like
multiSearchAny
but ignores case.
Syntax
sql
multiSearchAnyCaseInsensitive(haystack, [needle1, needle2, ..., needleN])
Arguments
haystack
— String in which the search is performed.
String
needle
— Substrings to be searched.
Array(String)
Returned value
Returns
1
if there was at least one case-insensitive match, otherwise
0
.
UInt8
Examples
Case insensitive search
sql title=Query
SELECT multiSearchAnyCaseInsensitive('ClickHouse',['c','h'])
response title=Response
┌─multiSearchAnyCaseInsensitive('ClickHouse', ['c', 'h'])─┐
│                                                       1 │
└─────────────────────────────────────────────────────────┘
multiSearchAnyCaseInsensitiveUTF8 {#multiSearchAnyCaseInsensitiveUTF8}
Introduced in: v20.1
Like
multiSearchAnyUTF8
but ignores case.
Syntax
sql
multiSearchAnyCaseInsensitiveUTF8(haystack, [needle1, needle2, ..., needleN])
Arguments
haystack
— UTF-8 string in which the search is performed.
String
needle
— UTF-8 substrings to be searched.
Array(String)
Returned value
Returns
1
if there was at least one case-insensitive match, otherwise
0
.
UInt8
Examples
Given a UTF-8 string 'Здравствуйте', check if character 'з' (lowercase) is present
sql title=Query
SELECT multiSearchAnyCaseInsensitiveUTF8('Здравствуйте',['з'])
response title=Response
┌─multiSearchA⋯те', ['з'])─┐
│                        1 │
└──────────────────────────┘
multiSearchAnyUTF8 {#multiSearchAnyUTF8}
Introduced in: v20.1
Like
multiSearchAny
but assumes
haystack
and the
needle
substrings are UTF-8 encoded strings.
Syntax
sql
multiSearchAnyUTF8(haystack, [needle1, needle2, ..., needleN])
Arguments
haystack
— UTF-8 string in which the search is performed.
String
needle
— UTF-8 substrings to be searched.
Array(String)
Returned value
Returns
1
if there was at least one match, otherwise
0
.
UInt8
Examples
Given '你好，世界' ('Hello, world') as a UTF-8 string, check if there are any 你 or 界 characters in the string
sql title=Query
SELECT multiSearchAnyUTF8('你好，世界', ['你', '界'])
response title=Response
┌─multiSearchA⋯你', '界'])─┐
│                        1 │
└──────────────────────────┘
multiSearchFirstIndex {#multiSearchFirstIndex}
Introduced in: v20.1
Searches for multiple needle strings in a haystack string (case-sensitive) and returns the 1-based index of the first needle found.
Syntax
sql
multiSearchFirstIndex(haystack, [needle1, needle2, ..., needleN])
Arguments
haystack
— The string to search in.
String
needles
— Array of strings to search for.
Array(String)
Returned value
Returns the 1-based index (position in the needles array) of the first needle found in the haystack. Returns 0 if no needles are found. The search is case-sensitive.
UInt64
Examples
Usage example
sql title=Query
SELECT multiSearchFirstIndex('ClickHouse Database', ['Click', 'Database', 'Server']);
response title=Response
┌─multiSearchF⋯ 'Server'])─┐
│                        1 │
└──────────────────────────┘
Case-sensitive behavior
sql title=Query
SELECT multiSearchFirstIndex('ClickHouse Database', ['CLICK', 'Database', 'Server']);
response title=Response
┌─multiSearchF⋯ 'Server'])─┐
│                        2 │
└──────────────────────────┘
No match found
sql title=Query
SELECT multiSearchFirstIndex('Hello World', ['goodbye', 'test']);
response title=Response
┌─multiSearchF⋯', 'test'])─┐
│                        0 │
└──────────────────────────┘
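A Python sketch of these semantics (needles checked in array order; the first one present wins — consistent with the three examples above, though only an illustrative model):

```python
def multi_search_first_index(haystack: str, needles) -> int:
    # 1-based index of the first needle found, 0 if none match (case-sensitive)
    return next((i for i, n in enumerate(needles, 1) if n in haystack), 0)

print(multi_search_first_index('ClickHouse Database', ['Click', 'Database', 'Server']))  # 1
print(multi_search_first_index('ClickHouse Database', ['CLICK', 'Database', 'Server']))  # 2
print(multi_search_first_index('Hello World', ['goodbye', 'test']))                      # 0
```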
multiSearchFirstIndexCaseInsensitive {#multiSearchFirstIndexCaseInsensitive}
Introduced in: v20.1
Returns the index
i
(starting from 1) of the leftmost found needle_i in the string
haystack
and 0 otherwise.
Ignores case.
Syntax
sql
multiSearchFirstIndexCaseInsensitive(haystack, [needle1, needle2, ..., needleN])
Arguments
haystack
— String in which the search is performed.
String
needle
— Substrings to be searched.
Array(String)
Returned value
Returns the index (starting from 1) of the leftmost found needle. Otherwise
0
, if there was no match.
UInt64
Examples
Usage example
sql title=Query
SELECT multiSearchFirstIndexCaseInsensitive('hElLo WoRlD', ['World', 'Hello']);
response title=Response
┌─multiSearchF⋯, 'Hello'])─┐
│                        1 │
└──────────────────────────┘
multiSearchFirstIndexCaseInsensitiveUTF8 {#multiSearchFirstIndexCaseInsensitiveUTF8}
Introduced in: v20.1
Searches for multiple needle strings in a haystack string, case-insensitively with UTF-8 encoding support, and returns the 1-based index of the first needle found.
Syntax
sql
multiSearchFirstIndexCaseInsensitiveUTF8(haystack, [needle1, needle2, ..., needleN])
Arguments
haystack
— The string to search in.
String
needles
— Array of strings to search for.
Array(String)
Returned value
Returns the 1-based index (position in the needles array) of the first needle found in the haystack. Returns 0 if no needles are found. The search is case-insensitive and respects UTF-8 character encoding.
UInt64
Examples
Usage example
sql title=Query
SELECT multiSearchFirstIndexCaseInsensitiveUTF8('ClickHouse Database', ['CLICK', 'data', 'server']);
response title=Response
┌─multiSearchF⋯ 'server'])─┐
│                        1 │
└──────────────────────────┘
UTF-8 case handling
sql title=Query
SELECT multiSearchFirstIndexCaseInsensitiveUTF8('Привет Мир', ['мир', 'ПРИВЕТ']);
response title=Response
┌─multiSearchF⋯ 'ПРИВЕТ'])─┐
│                        1 │
└──────────────────────────┘
No match found
sql title=Query
SELECT multiSearchFirstIndexCaseInsensitiveUTF8('Hello World', ['goodbye', 'test']);
response title=Response
┌─multiSearchF⋯', 'test'])─┐
│                        0 │
└──────────────────────────┘
multiSearchFirstIndexUTF8 {#multiSearchFirstIndexUTF8}
Introduced in: v20.1
Returns the index
i
(starting from 1) of the leftmost found needle_i in the string
haystack
and 0 otherwise.
Assumes
haystack
and
needle
are UTF-8 encoded strings.
Syntax
sql
multiSearchFirstIndexUTF8(haystack, [needle1, needle2, ..., needleN])
Arguments
haystack
— UTF-8 string in which the search is performed.
String
needle
— Array of UTF-8 substrings to be searched.
Array(String)
Returned value
Returns the index (starting from 1) of the leftmost found needle. Otherwise 0, if there was no match.
UInt64
Examples
Usage example
sql title=Query
SELECT multiSearchFirstIndexUTF8('Здравствуйте мир', ['мир', 'здравствуйте']);
response title=Response
┌─multiSearchF⋯вствуйте'])─┐
│                        1 │
└──────────────────────────┘
multiSearchFirstPosition {#multiSearchFirstPosition}
Introduced in: v20.1
Like
position
but returns the leftmost offset in a
haystack
string which matches any of multiple
needle
strings.
Functions
multiSearchFirstPositionCaseInsensitive
,
multiSearchFirstPositionUTF8
and
multiSearchFirstPositionCaseInsensitiveUTF8
provide case-insensitive and/or UTF-8 variants of this function.
Syntax
sql
multiSearchFirstPosition(haystack, needle1[, needle2, ...])
Arguments
haystack
— String in which the search is performed.
String
needle1[, needle2, ...]
— An array of one or more substrings to be searched.
Array(String)
Returned value
Returns the leftmost offset in a
haystack
string which matches any of multiple
needle
strings, otherwise
0
, if there was no match.
UInt64
Examples
First position search
sql title=Query
SELECT multiSearchFirstPosition('Hello World',['llo', 'Wor', 'ld'])
response title=Response
┌─multiSearchFirstPosition('Hello World', ['llo', 'Wor', 'ld'])─┐
│                                                             3 │
└───────────────────────────────────────────────────────────────┘
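The leftmost-offset rule can be sketched by taking the minimum of the positions actually found (an illustrative model of the byte-offset semantics):

```python
def multi_search_first_position(haystack: str, needles) -> int:
    hb = haystack.encode("utf-8")
    hits = [hb.find(n.encode("utf-8")) + 1 for n in needles]  # 1-based, 0 = miss
    found = [p for p in hits if p > 0]
    return min(found) if found else 0

print(multi_search_first_position('Hello World', ['llo', 'Wor', 'ld']))  # 3
```

Here `'llo'`, `'Wor'`, and `'ld'` are found at offsets 3, 7, and 10, so the leftmost wins.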
multiSearchFirstPositionCaseInsensitive {#multiSearchFirstPositionCaseInsensitive}
Introduced in: v20.1
Like
multiSearchFirstPosition
but ignores case.
Syntax
sql
multiSearchFirstPositionCaseInsensitive(haystack, [needle1, needle2, ..., needleN])
Arguments
haystack
— String in which the search is performed.
String
needle
— Array of substrings to be searched.
Array(String)
Returned value
Returns the leftmost offset in a
haystack
string which matches any of multiple
needle
strings. Returns
0
, if there was no match.
UInt64
Examples
Case insensitive first position
sql title=Query
SELECT multiSearchFirstPositionCaseInsensitive('HELLO WORLD',['wor', 'ld', 'ello'])
response title=Response
┌─multiSearchFirstPositionCaseInsensitive('HELLO WORLD', ['wor', 'ld', 'ello'])─┐
│                                                                             2 │
└───────────────────────────────────────────────────────────────────────────────┘
multiSearchFirstPositionCaseInsensitiveUTF8 {#multiSearchFirstPositionCaseInsensitiveUTF8}
Introduced in: v20.1
Like
multiSearchFirstPosition
but assumes
haystack
and
needle
to be UTF-8 strings and ignores case.
Syntax
sql
multiSearchFirstPositionCaseInsensitiveUTF8(haystack, [needle1, needle2, ..., needleN])
Arguments
haystack
— UTF-8 string in which the search is performed.
String
needle
— Array of UTF-8 substrings to be searched.
Array(String)
Returned value
Returns the leftmost offset in a
haystack
string which matches any of multiple
needle
strings, ignoring case. Returns
0
, if there was no match.
UInt64
Examples
Find the leftmost offset in UTF-8 string 'ΠΠ΄ΡΠ°Π²ΡΡΠ²ΡΠΉ, ΠΌΠΈΡ' ('Hello, world') which matches any of the given needles
sql title=Query
SELECT multiSearchFirstPositionCaseInsensitiveUTF8('ΠΠ΄ΡΠ°Π²ΡΡΠ²ΡΠΉ, ΠΌΠΈΡ', ['ΠΠΠ ', 'Π²ΡΡ', 'ΠΠ΄ΡΠ°'])
response title=Response
ββmultiSearchFirstPositionCaseInsensitiveUTF8('ΠΠ΄ΡΠ°Π²ΡΡΠ²ΡΠΉ, ΠΌΠΈΡ', ['ΠΌΠΈΡ', 'Π²ΡΡ', 'ΠΠ΄ΡΠ°'])ββ
β 3 β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
multiSearchFirstPositionUTF8 {#multiSearchFirstPositionUTF8}
Introduced in: v20.1
Like
multiSearchFirstPosition
but assumes
haystack
and
needle
to be UTF-8 strings.
Syntax
sql
multiSearchFirstPositionUTF8(haystack, [needle1, needle2, ..., needleN])
Arguments
haystack
— UTF-8 string in which the search is performed.
String
needle
— Array of UTF-8 substrings to be searched.
Array(String)
Returned value
Leftmost offset in a
haystack
string which matches any of multiple
needle
strings. Returns
0
, if there was no match.
UInt64
Examples
Find the leftmost offset in UTF-8 string 'ΠΠ΄ΡΠ°Π²ΡΡΠ²ΡΠΉ, ΠΌΠΈΡ' ('Hello, world') which matches any of the given needles
sql title=Query
SELECT multiSearchFirstPositionUTF8('ΠΠ΄ΡΠ°Π²ΡΡΠ²ΡΠΉ, ΠΌΠΈΡ',['ΠΌΠΈΡ', 'Π²ΡΡ', 'Π°Π²ΡΡ'])
response title=Response
ββmultiSearchFirstPositionUTF8('ΠΠ΄ΡΠ°Π²ΡΡΠ²ΡΠΉ, ΠΌΠΈΡ', ['ΠΌΠΈΡ', 'Π²ΡΡ', 'Π°Π²ΡΡ'])ββ
β 3 β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
ngramDistance {#ngramDistance}
Introduced in: v20.1
Calculates the 4-gram distance between two strings.
For this, it counts the symmetric difference between two multisets of 4-grams and normalizes it by the sum of their cardinalities.
The smaller the returned value, the more similar the strings are.
For case-insensitive search or/and in UTF8 format use functions
ngramDistanceCaseInsensitive
,
ngramDistanceUTF8
,
ngramDistanceCaseInsensitiveUTF8
.
Syntax
sql
ngramDistance(haystack, needle)
Arguments
haystack
— String for comparison.
String
needle
— String for comparison.
String
Returned value
Returns a Float32 number between
0
and
1
. The smaller the returned value, the more similar the strings are.
Float32
Examples
Calculate 4-gram distance
sql title=Query
SELECT ngramDistance('ClickHouse', 'ClickHouses')
response title=Response
┌─ngramDistance('ClickHouse', 'ClickHouses')─┐
│                                        0.1 │
└────────────────────────────────────────────┘
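The idea (symmetric difference of the two 4-gram multisets, normalized by the sum of their cardinalities) can be sketched in Python. This is a simplified model: ClickHouse hashes n-grams internally and handles short strings specially, so exact values such as the 0.1 above are not necessarily reproduced:

```python
from collections import Counter

def ngrams(s: str, n: int = 4) -> Counter:
    # multiset of all length-n substrings
    return Counter(s[i:i + n] for i in range(max(len(s) - n + 1, 0)))

def ngram_distance(a: str, b: str, n: int = 4) -> float:
    ca, cb = ngrams(a, n), ngrams(b, n)
    sym_diff = sum(((ca - cb) + (cb - ca)).values())  # multiset symmetric difference
    total = sum(ca.values()) + sum(cb.values())
    return sym_diff / total if total else 1.0

print(ngram_distance('ClickHouse', 'ClickHouse'))  # 0.0
```

Identical strings give 0, fully disjoint 4-gram sets give 1, and near-duplicates land close to 0.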
ngramDistanceCaseInsensitive {#ngramDistanceCaseInsensitive}
Introduced in: v20.1
Provides a case-insensitive variant of
ngramDistance
.
Calculates the 4-gram distance between two strings, ignoring case.
The smaller the returned value, the more similar the strings are.
Syntax
sql
ngramDistanceCaseInsensitive(haystack, needle)
Arguments
haystack
— First comparison string.
String
needle
— Second comparison string.
String
Returned value
Returns a Float32 number between
0
and
1
.
Float32
Examples
Case-insensitive 4-gram distance
sql title=Query
SELECT ngramDistanceCaseInsensitive('ClickHouse','clickhouse')
response title=Response
┌─ngramDistanceCaseInsensitive('ClickHouse','clickhouse')─┐
│                                                       0 │
└─────────────────────────────────────────────────────────┘
ngramDistanceCaseInsensitiveUTF8 {#ngramDistanceCaseInsensitiveUTF8}
Introduced in: v20.1
Provides a case-insensitive UTF-8 variant of
ngramDistance
.
Assumes that
needle
and
haystack
strings are UTF-8 encoded strings and ignores case.
Calculates the 3-gram distance between two UTF-8 strings, ignoring case.
The smaller the returned value, the more similar the strings are.
Syntax
sql
ngramDistanceCaseInsensitiveUTF8(haystack, needle)
Arguments
haystack
— First UTF-8 encoded comparison string.
String
needle
— Second UTF-8 encoded comparison string.
String
Returned value
Returns a Float32 number between
0
and
1
.
Float32
Examples
Case-insensitive UTF-8 3-gram distance
sql title=Query
SELECT ngramDistanceCaseInsensitiveUTF8('abcde','CDE')
response title=Response
┌─ngramDistanceCaseInsensitiveUTF8('abcde','CDE')─┐
│                                             0.5 │
└─────────────────────────────────────────────────┘
ngramDistanceUTF8 {#ngramDistanceUTF8}
Introduced in: v20.1
Provides a UTF-8 variant of
ngramDistance
.
Assumes that
needle
and
haystack
strings are UTF-8 encoded strings.
Calculates the 3-gram distance between two UTF-8 strings.
The smaller the returned value, the more similar the strings are.
Syntax
sql
ngramDistanceUTF8(haystack, needle)
Arguments
haystack
— First UTF-8 encoded comparison string.
String
needle
— Second UTF-8 encoded comparison string.
String
Returned value
Returns a Float32 number between
0
and
1
.
Float32
Examples
UTF-8 3-gram distance
sql title=Query
SELECT ngramDistanceUTF8('abcde','cde')
response title=Response
┌─ngramDistanceUTF8('abcde','cde')─┐
│                              0.5 │
└──────────────────────────────────┘
ngramSearch {#ngramSearch}
Introduced in: v20.1
Checks if the 4-gram distance between two strings is less than or equal to a given threshold.
For case-insensitive search or/and in UTF8 format use functions
ngramSearchCaseInsensitive
,
ngramSearchUTF8
,
ngramSearchCaseInsensitiveUTF8
.
Syntax
sql
ngramSearch(haystack, needle)
Arguments
haystack
— String for comparison.
String
needle
— String for comparison.
String
Returned value
Returns
1
if the 4-gram distance between the strings is less than or equal to a threshold (
1.0
by default),
0
otherwise.
UInt8
Examples
Search using 4-grams
sql title=Query
SELECT ngramSearch('ClickHouse', 'Click')
response title=Response
┌─ngramSearch('ClickHouse', 'Click')─┐
│                                  1 │
└────────────────────────────────────┘
ngramSearchCaseInsensitive {#ngramSearchCaseInsensitive}
Introduced in: v20.1