id stringlengths 36 36 | document stringlengths 3 3k | metadata stringlengths 23 69 | embeddings listlengths 384 384 |
|---|---|---|---|
85356965-84e4-4dfe-9ae8-fce33b508b9e | * Trying ::1...
* Connected to localhost (::1) port 8123 (#0)
> GET /get_relative_path_static_handler HTTP/1.1
> Host: localhost:8123
> User-Agent: curl/7.47.0
> Accept: */*
> XXX:xxx
>
< HTTP/1.1 200 OK
< Date: Wed, 29 Apr 2020 04:18:31 GMT
< Connection: Keep-Alive
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334","memory_usage":"8451671"}
<
Relative Path File
* Connection #0 to host localhost left intact
```
redirect {#redirect}
`redirect` will do a `302` redirect to `location`.
For instance, this is how you can automatically set `user` to `play` for ClickHouse Play:
```xml
<clickhouse>
<http_handlers>
<rule>
<methods>GET</methods>
<url>/play</url>
<handler>
<type>redirect</type>
<location>/play?user=play</location>
</handler>
</rule>
</http_handlers>
</clickhouse>
```
HTTP response headers {#http-response-headers}
ClickHouse allows you to configure custom HTTP response headers that can be applied to any kind of handler that can be configured. These headers can be set using the `http_response_headers` setting, which accepts key-value pairs representing header names and their values. This feature is particularly useful for implementing custom security headers, CORS policies, or any other HTTP header requirements across your ClickHouse HTTP interface.
For example, you can configure headers for:
- Regular query endpoints
- Web UI
- Health checks
It is also possible to specify `common_http_response_headers`. These will be applied to all HTTP handlers defined in the configuration.
The headers will be included in the HTTP response for every configured handler.
In the example below, every server response will contain two custom headers: `X-My-Common-Header` and `X-My-Custom-Header`.
```xml
<clickhouse>
<http_handlers>
<common_http_response_headers>
<X-My-Common-Header>Common header</X-My-Common-Header>
</common_http_response_headers>
<rule>
<methods>GET</methods>
<url>/ping</url>
<handler>
<type>ping</type>
<http_response_headers>
<X-My-Custom-Header>Custom indeed</X-My-Custom-Header>
</http_response_headers>
</handler>
</rule>
</http_handlers>
</clickhouse>
```
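As a sketch of the CORS use case mentioned above, `common_http_response_headers` could carry the CORS headers for every handler. The header values and origin here are illustrative, not taken from a ClickHouse release; `<defaults/>` enables the default handlers.

```xml
<clickhouse>
    <http_handlers>
        <common_http_response_headers>
            <!-- Illustrative CORS policy applied to all handlers -->
            <Access-Control-Allow-Origin>https://example.com</Access-Control-Allow-Origin>
            <Access-Control-Allow-Methods>GET, POST</Access-Control-Allow-Methods>
        </common_http_response_headers>
        <defaults/>
    </http_handlers>
</clickhouse>
```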
Valid JSON/XML response on exception during HTTP streaming {#valid-output-on-exception-http-streaming} | {"source_file": "http.md"} | [
0.01105574518442154,
0.004549206700176001,
-0.06379014998674393,
0.01726343296468258,
0.004788857419043779,
-0.09530482441186905,
-0.08622515201568604,
-0.018227310851216316,
-0.010434835217893124,
0.03590432181954384,
0.011543245054781437,
0.008609812706708908,
-0.005174090154469013,
-0.0... |
d33c6010-a5e8-4648-adbe-41855199b6c7 | Valid JSON/XML response on exception during HTTP streaming {#valid-output-on-exception-http-streaming}
While query execution occurs over HTTP, an exception can happen when part of the data has already been sent. Usually the exception is sent to the client in plain text, even if some specific data format was used to output the data, in which case the output becomes invalid in terms of the specified format.
To prevent this, you can use the setting `http_write_exception_in_output_format` (disabled by default), which tells ClickHouse to write the exception in the specified format (currently supported for XML and JSON* formats).
Examples:
```bash
$ curl 'http://localhost:8123/?query=SELECT+number,+throwIf(number>2)+from+system.numbers+format+JSON+settings+max_block_size=1&http_write_exception_in_output_format=1'
{
"meta":
[
{
"name": "number",
"type": "UInt64"
},
{
"name": "throwIf(greater(number, 2))",
"type": "UInt8"
}
],
"data":
[
{
"number": "0",
"throwIf(greater(number, 2))": 0
},
{
"number": "1",
"throwIf(greater(number, 2))": 0
},
{
"number": "2",
"throwIf(greater(number, 2))": 0
}
],
"rows": 3,
"exception": "Code: 395. DB::Exception: Value passed to 'throwIf' function is non-zero: while executing 'FUNCTION throwIf(greater(number, 2) :: 2) -> throwIf(greater(number, 2)) UInt8 : 1'. (FUNCTION_THROW_IF_VALUE_IS_NON_ZERO) (version 23.8.1.1)"
}
```
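The query strings above use `+` for spaces; for arbitrary queries it can be easier to percent-encode programmatically, and a client can then check the response body for the trailing `exception` field. A sketch, assuming `python3` is available; the response body is inlined here for illustration rather than fetched from a server:

```shell
# Build a percent-encoded URL for the HTTP interface.
query='SELECT number, throwIf(number>2) FROM system.numbers FORMAT JSON SETTINGS max_block_size=1'
encoded=$(printf '%s' "$query" | python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.stdin.read()))')
echo "http://localhost:8123/?query=${encoded}&http_write_exception_in_output_format=1"

# Detect a mid-stream failure: with http_write_exception_in_output_format=1
# the JSON body stays valid and carries an "exception" field.
response='{"meta":[{"name":"number","type":"UInt64"}],"rows":3,"exception":"Code: 395. DB::Exception: Value passed to throwIf function is non-zero."}'
exception=$(printf '%s' "$response" | python3 -c 'import json, sys; print(json.load(sys.stdin).get("exception", ""))')
[ -n "$exception" ] && echo "query failed mid-stream: $exception"
```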
```bash
$ curl 'http://localhost:8123/?query=SELECT+number,+throwIf(number>2)+from+system.numbers+format+XML+settings+max_block_size=1&http_write_exception_in_output_format=1'
<?xml version='1.0' encoding='UTF-8' ?>
<result>
<meta>
<columns>
<column>
<name>number</name>
<type>UInt64</type>
</column>
<column>
<name>throwIf(greater(number, 2))</name>
<type>UInt8</type>
</column>
</columns>
</meta>
<data>
<row>
<number>0</number>
<field>0</field>
</row>
<row>
<number>1</number>
<field>0</field>
</row>
<row>
<number>2</number>
<field>0</field>
</row>
</data>
<rows>3</rows>
<exception>Code: 395. DB::Exception: Value passed to 'throwIf' function is non-zero: while executing 'FUNCTION throwIf(greater(number, 2) :: 2) -> throwIf(greater(number, 2)) UInt8 : 1'. (FUNCTION_THROW_IF_VALUE_IS_NON_ZERO) (version 23.8.1.1)</exception>
</result>
``` | {"source_file": "http.md"} | [
0.0012340694665908813,
0.06598779559135437,
-0.043089620769023895,
-0.03742285445332527,
-0.017026396468281746,
-0.04312095791101456,
-0.06692197173833847,
0.011902199126780033,
0.020419254899024963,
-0.016406498849391937,
0.00048332984442822635,
0.00046995721640996635,
0.01797984540462494,
... |
6b9575f6-f8d6-47ff-8573-83f2b86dd85b | description: 'Overview of supported data formats for input and output in ClickHouse'
sidebar_label: 'View all formats...'
sidebar_position: 21
slug: /interfaces/formats
title: 'Formats for input and output data'
doc_type: 'reference'
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
Formats for input and output data {#formats-for-input-and-output-data}
ClickHouse supports most of the known text and binary data formats. This allows easy integration into almost any working
data pipeline to leverage the benefits of ClickHouse.
Input formats {#input-formats}
Input formats are used for:
- Parsing data provided to `INSERT` statements
- Performing `SELECT` queries from file-backed tables such as `File`, `URL`, or `HDFS`
- Reading dictionaries
Choosing the right input format is crucial for efficient data ingestion in ClickHouse. With over 70 supported formats,
selecting the most performant option can significantly impact insert speed, CPU and memory usage, and overall system
efficiency. To help navigate these choices, we benchmarked ingestion performance across formats, revealing key takeaways:
- The `Native` format is the most efficient input format, offering the best compression, lowest resource usage, and minimal server-side processing overhead.
- Compression is essential - LZ4 reduces data size with minimal CPU cost, while ZSTD offers higher compression at the expense of additional CPU usage.
- Pre-sorting has a moderate impact, as ClickHouse already sorts efficiently.
- Batching significantly improves efficiency - larger batches reduce insert overhead and improve throughput.
For a deep dive into the results and best practices, read the full benchmark analysis.
For the full test results, explore the FastFormats online dashboard.
Output formats {#output-formats}
Formats supported for output are used for:
- Arranging the results of a `SELECT` query
- Performing `INSERT` operations into file-backed tables
Formats overview {#formats-overview}
The supported formats are: | {"source_file": "formats.md"} | [
-0.0028056511655449867,
-0.0046039666049182415,
-0.03940486162900925,
0.022529104724526405,
0.045011814683675766,
-0.028072981163859367,
-0.0077867209911346436,
-0.0009160850895568728,
-0.021163754165172577,
0.01669805496931076,
-0.025147348642349243,
0.002561356406658888,
0.0378081835806369... |
58c31f14-922e-4560-97f8-181d7f265003 | | Format | Input | Output |
|------------------------------------------------------------------------------------------------------------|-----|-------|
| TabSeparated | ✔ | ✔ |
| TabSeparatedRaw | ✔ | ✔ |
| TabSeparatedWithNames | ✔ | ✔ |
| TabSeparatedWithNamesAndTypes | ✔ | ✔ |
| TabSeparatedRawWithNames | ✔ | ✔ |
| TabSeparatedRawWithNamesAndTypes | ✔ | ✔ |
| Template | ✔ | ✔ |
| TemplateIgnoreSpaces | ✔ | ✗ |
| CSV | ✔ | ✔ |
| CSVWithNames | ✔ | ✔ |
| CSVWithNamesAndTypes | ✔ | ✔ |
| CustomSeparated | ✔ | ✔ |
| CustomSeparatedWithNames | ✔ | ✔ |
| CustomSeparatedWithNamesAndTypes | ✔ | ✔ |
| SQLInsert | ✗ | ✔ |
| Values | ✔ | ✔ |
| Vertical | ✗ | ✔ |
| JSON | ✔ | ✔ |
| JSONAsString | ✔ | ✗ |
| JSONAsObject | ✔ | ✗ |
| JSONStrings | ✔ | ✔ |
| JSONColumns | ✔ | ✔ |
| JSONColumnsWithMetadata | ✔ | ✔ |
| JSONCompact | ✔ | ✔ |
| JSONCompactStrings | ✗ | ✔ |
| JSONCompactColumns | ✔ | ✔ |
| JSONEachRow | ✔ | ✔ |
| PrettyJSONEachRow | ✗ | ✔ |
| JSONEachRowWithProgress | ✗ | ✔ |
| JSONStringsEachRow | ✔ | ✔ |
| JSONStringsEachRowWithProgress | ✗ | ✔ |
| JSONCompactEachRow | {"source_file": "formats.md"} | [
0.008481687866151333,
0.0019257800886407495,
-0.12341410666704178,
0.04193520545959473,
-0.02817068248987198,
0.04650994762778282,
0.06331246346235275,
0.02799791470170021,
-0.03134078532457352,
-0.02106126770377159,
0.033097345381975174,
-0.1097135990858078,
0.09910759329795837,
-0.109923... |
37a91ba6-89c4-4f4b-afbe-8211fef31433 | | JSONStringsEachRow | ✔ | ✔ |
| JSONStringsEachRowWithProgress | ✗ | ✔ |
| JSONCompactEachRow | ✔ | ✔ |
| JSONCompactEachRowWithNames | ✔ | ✔ |
| JSONCompactEachRowWithNamesAndTypes | ✔ | ✔ |
| JSONCompactEachRowWithProgress | ✗ | ✔ |
| JSONCompactStringsEachRow | ✔ | ✔ |
| JSONCompactStringsEachRowWithNames | ✔ | ✔ |
| JSONCompactStringsEachRowWithNamesAndTypes | ✔ | ✔ |
| JSONCompactStringsEachRowWithProgress | ✗ | ✔ |
| JSONObjectEachRow | ✔ | ✔ |
| BSONEachRow | ✔ | ✔ |
| TSKV | ✔ | ✔ |
| Pretty | ✗ | ✔ |
| PrettyNoEscapes | ✗ | ✔ |
| PrettyMonoBlock | ✗ | ✔ |
| PrettyNoEscapesMonoBlock | ✗ | ✔ |
| PrettyCompact | ✗ | ✔ |
| PrettyCompactNoEscapes | ✗ | ✔ |
| PrettyCompactMonoBlock | ✗ | ✔ |
| PrettyCompactNoEscapesMonoBlock | ✗ | ✔ |
| PrettySpace | ✗ | ✔ |
| PrettySpaceNoEscapes | ✗ | ✔ |
| PrettySpaceMonoBlock | ✗ | ✔ |
| PrettySpaceNoEscapesMonoBlock | ✗ | ✔ |
| Prometheus | ✗ | ✔ |
| Protobuf | ✔ | ✔ |
| ProtobufSingle | ✔ | ✔ |
| ProtobufList | ✔ | ✔ |
| Avro | ✔ | ✔ |
| AvroConfluent | ✔ | ✗ |
| Parquet | ✔ | ✔ |
| ParquetMetadata | ✔ | ✗ |
| Arrow | ✔ | ✔ |
| ArrowStream | ✔ | ✔ |
| ORC | {"source_file": "formats.md"} | [
-0.0033957608975470066,
-0.03942533954977989,
-0.04639100655913353,
0.05735405534505844,
-0.048688679933547974,
-0.019617870450019836,
-0.025931933894753456,
0.011591394431889057,
-0.024019921198487282,
-0.03312968090176582,
0.046934958547353745,
-0.08551451563835144,
0.05162489786744118,
... |
75ec5fc9-3f3f-4b62-8dcc-63aad4cf2d3a | | Arrow | ✔ | ✔ |
| ArrowStream | ✔ | ✔ |
| ORC | ✔ | ✔ |
| One | ✔ | ✗ |
| Npy | ✔ | ✔ |
| RowBinary | ✔ | ✔ |
| RowBinaryWithNames | ✔ | ✔ |
| RowBinaryWithNamesAndTypes | ✔ | ✔ |
| RowBinaryWithDefaults | ✔ | ✗ |
| Native | ✔ | ✔ |
| Null | ✗ | ✔ |
| Hash | ✗ | ✔ |
| XML | ✗ | ✔ |
| CapnProto | ✔ | ✔ |
| LineAsString | ✔ | ✔ |
| LineAsStringWithNames | ✗ | ✔ |
| LineAsStringWithNamesAndTypes | ✗ | ✔ |
| Regexp | ✔ | ✗ |
| RawBLOB | ✔ | ✔ |
| MsgPack | ✔ | ✔ |
| MySQLDump | ✔ | ✗ |
| DWARF | ✔ | ✗ |
| Markdown | ✗ | ✔ |
| Form | ✔ | ✗ | | {"source_file": "formats.md"} | [
-0.03449348732829094,
-0.03425354138016701,
-0.08427758514881134,
-0.04067516699433327,
-0.03189879283308983,
-0.04185765981674194,
0.004530197009444237,
0.027863480150699615,
-0.0712091252207756,
0.0488802045583725,
0.05627666041254997,
-0.06593611091375351,
0.16885411739349365,
-0.058486... |
2c54911b-b7bf-434c-adcb-94b01c4071ab | You can control some format processing parameters with ClickHouse settings. For more information, read the Settings section.
Format schema {#formatschema}
The file name containing the format schema is set by the setting `format_schema`.
Setting it is required when one of the formats `Cap'n Proto` or `Protobuf` is used.
The format schema is a combination of a file name and the name of a message type in this file, delimited by a colon, e.g. `schemafile.proto:MessageType`.
If the file has the standard extension for the format (for example, `.proto` for `Protobuf`), it can be omitted, in which case the format schema looks like `schemafile:MessageType`.
If you input or output data via the client in interactive mode, the file name specified in the format schema can contain an absolute path or a path relative to the current directory on the client.
If you use the client in batch mode, the path to the schema must be relative, for security reasons.
If you input or output data via the HTTP interface, the file name specified in the format schema should be located in the directory specified in `format_schema_path` in the server configuration.
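The colon-delimited schema reference can be taken apart with plain shell parameter expansion. A sketch; the file and message names are illustrative:

```shell
# Split "<file>:<message type>" as used by the format_schema setting.
format_schema="schemafile.proto:MessageType"
schema_file="${format_schema%%:*}"
message_type="${format_schema##*:}"

# If the extension was omitted (e.g. "schemafile:MessageType" for Protobuf),
# the standard extension for the format is implied.
case "$schema_file" in
  *.*) ;;                                 # extension present, keep as-is
  *) schema_file="$schema_file.proto" ;;  # assume Protobuf here
esac
echo "$schema_file -> $message_type"
```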
Skipping errors {#skippingerrors}
Some formats, such as `CSV`, `TabSeparated`, `TSKV`, `JSONEachRow`, `Template`, `CustomSeparated` and `Protobuf`, can skip a broken row if a parsing error occurred and continue parsing from the beginning of the next row. See the `input_format_allow_errors_num` and `input_format_allow_errors_ratio` settings.
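These settings can be set per query or, as sketched here, in a user profile in the server configuration. The values are illustrative: allow up to 10 broken rows, or 1% of the input, whichever is hit first.

```xml
<clickhouse>
    <profiles>
        <default>
            <input_format_allow_errors_num>10</input_format_allow_errors_num>
            <input_format_allow_errors_ratio>0.01</input_format_allow_errors_ratio>
        </default>
    </profiles>
</clickhouse>
```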
Limitations:
- In case of a parsing error, `JSONEachRow` skips all data until the new line (or EOF), so rows must be delimited by `\n` to count errors correctly.
- `Template` and `CustomSeparated` use the delimiter after the last column and the delimiter between rows to find the beginning of the next row, so skipping errors works only if at least one of them is not empty. | {"source_file": "formats.md"} | [
0.03619643673300743,
0.028897196054458618,
-0.05882981792092323,
-0.03731180727481842,
-0.005520511884242296,
-0.03327053785324097,
0.006108306348323822,
0.04422811418771744,
-0.013074938207864761,
0.046503402292728424,
0.0018471110379323363,
-0.07205532491207123,
-0.005511925555765629,
0.... |
8c8d8d48-a24a-4f1d-8027-c8bba4282ebb | description: 'Documentation for the Prometheus protocol support in ClickHouse'
sidebar_label: 'Prometheus protocols'
sidebar_position: 19
slug: /interfaces/prometheus
title: 'Prometheus Protocols'
doc_type: 'reference'
Prometheus protocols
Exposing metrics {#expose}
:::note
If you are using ClickHouse Cloud, you can expose metrics to Prometheus using the
Prometheus Integration
.
:::
ClickHouse can expose its own metrics for scraping from Prometheus:
```xml
<prometheus>
<port>9363</port>
<endpoint>/metrics</endpoint>
<metrics>true</metrics>
<asynchronous_metrics>true</asynchronous_metrics>
<events>true</events>
<errors>true</errors>
<histograms>true</histograms>
<dimensional_metrics>true</dimensional_metrics>
</prometheus>
```
The `<prometheus.handlers>` section can be used to make more extended handlers.
This section is similar to `<http_handlers>` but works for Prometheus protocols:
```xml
<prometheus>
<port>9363</port>
<handlers>
<my_rule_1>
<url>/metrics</url>
<handler>
<type>expose_metrics</type>
<metrics>true</metrics>
<asynchronous_metrics>true</asynchronous_metrics>
<events>true</events>
<errors>true</errors>
<histograms>true</histograms>
<dimensional_metrics>true</dimensional_metrics>
</handler>
</my_rule_1>
</handlers>
</prometheus>
```
Settings: | {"source_file": "prometheus.md"} | [
-0.09744097292423248,
0.01193605363368988,
-0.029355963692069054,
0.039722081273794174,
-0.0035969025921076536,
-0.1175123080611229,
-0.00010588533768896013,
-0.046191614121198654,
-0.047082796692848206,
-0.02940347231924534,
0.00534020783379674,
-0.06309337168931961,
0.008660138584673405,
... |
905f6134-053f-47c0-b7c3-7c03f5b08393 | Settings:
| Name | Default | Description |
|---|---|---|
| `port` | none | Port for serving the exposing metrics protocol. |
| `endpoint` | `/metrics` | HTTP endpoint for scraping metrics by the Prometheus server. Starts with `/`. Should not be used with the `<handlers>` section. |
| `url` / `headers` / `method` | none | Filters used to find a matching handler for a request. Similar to the fields with the same names in the `<http_handlers>` section. |
| `metrics` | true | Expose metrics from the `system.metrics` table. |
| `asynchronous_metrics` | true | Expose current metrics values from the `system.asynchronous_metrics` table. |
| `events` | true | Expose metrics from the `system.events` table. |
| `errors` | true | Expose the number of errors by error code that have occurred since the last server restart. This information can be obtained from `system.errors` as well. |
| `histograms` | true | Expose histogram metrics from `system.histogram_metrics`. |
| `dimensional_metrics` | true | Expose dimensional metrics from `system.dimensional_metrics`. |
Check (replace `127.0.0.1` with the IP address or hostname of your ClickHouse server):
```bash
curl 127.0.0.1:9363/metrics
```
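On the Prometheus side, a minimal scrape job pointing at this endpoint might look like the following `prometheus.yml` fragment. The target address is an assumption for your deployment:

```yaml
scrape_configs:
  - job_name: 'clickhouse'
    metrics_path: /metrics
    static_configs:
      - targets: ['127.0.0.1:9363']
```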
Remote-write protocol {#remote-write}
ClickHouse supports the remote-write protocol.
Data are received by this protocol and written to a TimeSeries table (which should be created beforehand).
```xml
<prometheus>
<port>9363</port>
<handlers>
<my_rule_1>
<url>/write</url>
<handler>
<type>remote_write</type>
<database>db_name</database>
<table>time_series_table</table>
</handler>
</my_rule_1>
</handlers>
</prometheus>
```
Settings: | {"source_file": "prometheus.md"} | [
-0.02200070023536682,
0.02604539319872856,
-0.11286847293376923,
-0.05419408529996872,
-0.08265211433172226,
0.021259717643260956,
0.0005575126269832253,
0.014004618860781193,
0.02579379454255104,
-0.04096599295735359,
0.07061802595853806,
-0.026637673377990723,
-0.04954273998737335,
-0.02... |
7dcb9070-0e4f-46de-9c91-cd48f98d3e91 | Settings:
| Name | Default | Description |
|---|---|---|
| `port` | none | Port for serving the remote-write protocol. |
| `url` / `headers` / `method` | none | Filters used to find a matching handler for a request. Similar to the fields with the same names in the `<http_handlers>` section. |
| `table` | none | The name of a TimeSeries table to write data received by the remote-write protocol. This name can optionally contain the name of a database too. |
| `database` | none | The name of the database where the table specified in the `table` setting is located, if it's not specified in the `table` setting. |
Remote-read protocol {#remote-read}
ClickHouse supports the remote-read protocol.
Data are read from a TimeSeries table and sent via this protocol.
```xml
<prometheus>
<port>9363</port>
<handlers>
<my_rule_1>
<url>/read</url>
<handler>
<type>remote_read</type>
<database>db_name</database>
<table>time_series_table</table>
</handler>
</my_rule_1>
</handlers>
</prometheus>
```
Settings: | {"source_file": "prometheus.md"} | [
-0.010641203261911869,
0.02275925502181053,
-0.05466742813587189,
-0.03330705687403679,
-0.08182521164417267,
0.0292845219373703,
0.007413509301841259,
0.011380748823285103,
0.026312120258808136,
-0.062225304543972015,
0.04248277470469475,
-0.04980122670531273,
-0.03565997630357742,
-0.045... |
01190a94-87cd-4b3b-a3e7-2179738fdcea | Settings:
| Name | Default | Description |
|---|---|---|
| `port` | none | Port for serving the remote-read protocol. |
| `url` / `headers` / `method` | none | Filters used to find a matching handler for a request. Similar to the fields with the same names in the `<http_handlers>` section. |
| `table` | none | The name of a TimeSeries table to read data to send by the remote-read protocol. This name can optionally contain the name of a database too. |
| `database` | none | The name of the database where the table specified in the `table` setting is located, if it's not specified in the `table` setting. |
Configuration for multiple protocols {#multiple-protocols}
Multiple protocols can be specified together in one place:
```xml
<prometheus>
<port>9363</port>
<handlers>
<my_rule_1>
<url>/metrics</url>
<handler>
<type>expose_metrics</type>
<metrics>true</metrics>
<asynchronous_metrics>true</asynchronous_metrics>
<events>true</events>
<errors>true</errors>
<histograms>true</histograms>
<dimensional_metrics>true</dimensional_metrics>
</handler>
</my_rule_1>
<my_rule_2>
<url>/write</url>
<handler>
<type>remote_write</type>
<table>db_name.time_series_table</table>
</handler>
</my_rule_2>
<my_rule_3>
<url>/read</url>
<handler>
<type>remote_read</type>
<table>db_name.time_series_table</table>
</handler>
</my_rule_3>
</handlers>
</prometheus>
``` | {"source_file": "prometheus.md"} | [
-0.01835065521299839,
0.013662350364029408,
-0.09514331817626953,
-0.04591146111488342,
-0.08188188821077347,
0.010966011323034763,
-0.0048087602481245995,
0.008924546651542187,
0.023448394611477852,
-0.04994819313287735,
0.05450240895152092,
-0.027562623843550682,
-0.04898149520158768,
-0... |
9a992c9d-ac2f-469f-861b-f7885ba17df6 | description: 'Overview of network interfaces, drivers, and tools for connecting to
ClickHouse'
keywords: ['clickhouse', 'network', 'interfaces', 'http', 'tcp', 'grpc', 'command-line',
'client', 'jdbc', 'odbc', 'driver']
sidebar_label: 'Overview'
slug: /interfaces/overview
title: 'Drivers and Interfaces'
doc_type: 'reference'
Drivers and Interfaces
ClickHouse provides two network interfaces (both can be optionally wrapped in TLS for additional security):
- HTTP, which is documented and easy to use directly.
- Native TCP, which has less overhead.
In most cases it is recommended to use an appropriate tool or library instead of interacting with those directly. The following are officially supported by ClickHouse:
- Command-line client
- JDBC driver
- ODBC driver
- C++ client library
ClickHouse also supports two RPC protocols:
- gRPC protocol, specially designed for ClickHouse.
- Apache Arrow Flight.
The ClickHouse server provides embedded visual interfaces for power users:
- Play UI: open `/play` in the browser.
- Advanced Dashboard: open `/dashboard` in the browser.
- Binary symbols viewer for ClickHouse engineers: open `/binary` in the browser.
There is also a wide range of third-party libraries for working with ClickHouse:
- Client libraries
- Integrations
- Visual interfaces | {"source_file": "overview.md"} | [
-0.04593561589717865,
-0.023569393903017044,
-0.08342273533344269,
-0.03537513315677643,
-0.08897517621517181,
-0.03797188401222229,
0.02655276469886303,
0.019206637516617775,
-0.03456537798047066,
-0.061426691710948944,
-0.011331308633089066,
-0.009119858033955097,
-0.03901300206780434,
-... |
a4c1e12e-6a25-4083-ae30-70ff5ef01e06 | description: 'Documentation for the native TCP interface in ClickHouse'
sidebar_label: 'Native interface (TCP)'
sidebar_position: 18
slug: /interfaces/tcp
title: 'Native interface (TCP)'
doc_type: 'reference'
Native interface (TCP)
The native protocol is used in the command-line client, for inter-server communication during distributed query processing, and also in other C++ programs. Unfortunately, the native ClickHouse protocol does not have a formal specification yet, but it can be reverse-engineered from the ClickHouse source code (starting around here) and/or by intercepting and analyzing TCP traffic. | {"source_file": "tcp.md"} | [
-0.060890085995197296,
0.031954459846019745,
-0.040677547454833984,
-0.05997065454721451,
-0.08479964733123779,
-0.052132561802864075,
0.06079825386404991,
-0.021621445193886757,
-0.04777335003018379,
-0.04509083926677704,
-0.03668346628546715,
-0.051647212356328964,
-0.016845179721713066,
... |
92f397af-3eaf-4253-a464-fc173fb12dac | description: 'Documentation for the ClickHouse command-line client interface'
sidebar_label: 'ClickHouse Client'
sidebar_position: 17
slug: /interfaces/cli
title: 'ClickHouse Client'
doc_type: 'reference'
import Image from '@theme/IdealImage';
import cloud_connect_button from '@site/static/images/_snippets/cloud-connect-button.png';
import connection_details_native from '@site/static/images/_snippets/connection-details-native.png';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
ClickHouse provides a native command-line client for executing SQL queries directly against a ClickHouse server.
It supports both interactive mode (for live query execution) and batch mode (for scripting and automation).
Query results can be displayed in the terminal or exported to a file, with support for all ClickHouse output formats, such as Pretty, CSV, JSON, and more.
The client provides real-time feedback on query execution with a progress bar and the number of rows read, bytes processed, and query execution time.
It supports both command-line options and configuration files.
Install {#install}
To download ClickHouse, run:
```bash
curl https://clickhouse.com/ | sh
```
To also install it, run:
```bash
sudo ./clickhouse install
```
See Install ClickHouse for more installation options.
Different client and server versions are compatible with one another, but some features may not be available in older clients. We recommend using the same version for client and server.
Run {#run}
:::note
If you only downloaded but did not install ClickHouse, use `./clickhouse client` instead of `clickhouse-client`.
:::
To connect to a ClickHouse server, run:
```bash
$ clickhouse-client --host server
ClickHouse client version 24.12.2.29 (official build).
Connecting to server:9000 as user default.
Connected to ClickHouse server version 24.12.2.
:)
```
Specify additional connection details as necessary: | {"source_file": "cli.md"} | [
0.002742650918662548,
-0.015398628078401089,
-0.08189269155263901,
0.05428599566221237,
-0.04759230092167854,
0.025582924485206604,
0.05332612246274948,
-0.009681975468993187,
-0.06284765154123306,
0.029828401282429695,
0.02738083526492119,
-0.007816729135811329,
0.08634011447429657,
-0.09... |
00713197-7e36-4275-baec-34e7c6b449d5 | :)
```
Specify additional connection details as necessary:
| Option | Description |
|---|---|
| `--port <port>` | The port the ClickHouse server is accepting connections on. The default ports are 9440 (TLS) and 9000 (no TLS). Note that ClickHouse Client uses the native protocol and not HTTP(S). |
| `-s [ --secure ]` | Whether to use TLS (usually autodetected). |
| `-u [ --user ] <username>` | The database user to connect as. Connects as the `default` user by default. |
| `--password <password>` | The password of the database user. You can also specify the password for a connection in the configuration file. If you do not specify the password, the client will ask for it. |
| `-c [ --config ] <path-to-file>` | The location of the configuration file for ClickHouse Client, if it is not at one of the default locations. See Configuration Files. |
| `--connection <name>` | The name of preconfigured connection details from the configuration file. |
For a complete list of command-line options, see Command Line Options.
Connecting to ClickHouse Cloud {#connecting-cloud}
The details for your ClickHouse Cloud service are available in the ClickHouse Cloud console. Select the service that you want to connect to and click Connect:
Choose Native, and the details are shown with an example `clickhouse-client` command:
Storing connections in a configuration file {#connection-credentials}
You can store connection details for one or more ClickHouse servers in a configuration file.
The format looks like this:
```xml
<config>
<connections_credentials>
<connection>
<name>default</name>
<hostname>hostname</hostname>
<port>9440</port>
<secure>1</secure>
<user>default</user>
<password>password</password>
<!-- <history_file></history_file> -->
<!-- <history_max_entries></history_max_entries> -->
<!-- <accept-invalid-certificate>false</accept-invalid-certificate> -->
<!-- <prompt></prompt> -->
</connection>
</connections_credentials>
</config>
``` | {"source_file": "cli.md"} | [
-0.023142864927649498,
-0.011915916576981544,
-0.12052687257528305,
-0.07406134903430939,
-0.12514305114746094,
-0.004632559604942799,
-0.0233451034873724,
-0.04916366562247276,
-0.012812148779630661,
-0.031160037964582443,
0.09865543246269226,
0.008821405470371246,
0.0022802322637289762,
... |
9e27dc76-be45-488b-9fec-0a397af73fb5 | See the section on configuration files for more information.
:::note
To concentrate on the query syntax, the rest of the examples leave off the connection details (`--host`, `--port`, etc.). Remember to add them when you use the commands.
:::
Interactive mode {#interactive-mode}
Using interactive mode {#using-interactive-mode}
To run ClickHouse in interactive mode, simply execute:
```bash
clickhouse-client
```
This opens the Read-Eval-Print Loop (REPL) where you can start typing SQL queries interactively.
Once connected, you'll get a prompt where you can enter queries:
```bash
ClickHouse client version 25.x.x.x
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 25.x.x.x
hostname :)
```
In interactive mode, the default output format is `PrettyCompact`.
You can change the format in the `FORMAT` clause of the query or by specifying the `--format` command-line option.
To use the Vertical format, you can use `--vertical` or specify `\G` at the end of the query.
In this format, each value is printed on a separate line, which is convenient for wide tables.
In interactive mode, by default whatever was entered is run when you press `Enter`.
A semicolon is not necessary at the end of the query.
You can start the client with the `-m, --multiline` parameter.
To enter a multiline query, enter a backslash `\` before the line feed.
After you press `Enter`, you will be asked to enter the next line of the query.
To run the query, end it with a semicolon and press `Enter`.
ClickHouse Client is based on `replxx` (similar to `readline`), so it uses familiar keyboard shortcuts and keeps a history.
The history is written to `~/.clickhouse-client-history` by default.
To exit the client, press `Ctrl+D`, or enter one of the following instead of a query:
- `exit` or `exit;`
- `quit` or `quit;`
- `q`, `Q` or `:q`
- `logout` or `logout;`
Query processing information {#processing-info}
When processing a query, the client shows:
- Progress, which is updated no more than 10 times per second by default. For quick queries, the progress might not have time to be displayed.
- The formatted query after parsing, for debugging.
- The result in the specified format.
- The number of lines in the result, the time passed, and the average speed of query processing. All data amounts refer to uncompressed data.
You can cancel a long query by pressing `Ctrl+C`.
However, you will still need to wait a little for the server to abort the request.
It is not possible to cancel a query at certain stages.
If you do not wait and press `Ctrl+C` a second time, the client will exit.
ClickHouse Client allows passing external data (external temporary tables) for querying.
For more information, see the section External data for query processing.
Aliases {#cli_aliases}
You can use the following aliases from within the REPL:
- `\l` - SHOW DATABASES
- `\d` - SHOW TABLES
- `\c <DATABASE>` - USE DATABASE | {"source_file": "cli.md"} | [
0.06912355124950409,
-0.03410870209336281,
-0.10468871891498566,
0.07471133023500443,
-0.09842386096715927,
-0.03419452905654907,
0.0723184272646904,
0.005032103043049574,
-0.08076144754886627,
-0.0017133729998022318,
0.02444186992943287,
-0.02873985655605793,
0.09954312443733215,
-0.05752... |
0d3c5d15-0cf6-4c21-a2cf-fc7c7540bc1d | Aliases {#cli_aliases}
You can use the following aliases from within the REPL:
\l
- SHOW DATABASES
\d
- SHOW TABLES
\c <DATABASE>
- USE DATABASE
.
- repeat the last query
Keyboard shortcuts {#keyboard_shortcuts}
Alt (Option) + Shift + e
- open editor with the current query. It is possible to specify the editor to use with the environment variable
EDITOR
. By default,
vim
is used.
Alt (Option) + #
- comment line.
Ctrl + r
- fuzzy history search.
The full list with all available keyboard shortcuts is available at
replxx
.
:::tip
To configure the correct work of the meta key (Option) on MacOS:
iTerm2: Go to Preferences -> Profile -> Keys -> Left Option key and click Esc+
:::
Batch mode {#batch-mode}
Using batch mode {#using-batch-mode}
Instead of using ClickHouse Client interactively, you can run it in batch mode.
In batch mode, ClickHouse executes a single query and exits immediately - there's no interactive prompt or loop.
You can specify a single query like this:
bash
$ clickhouse-client "SELECT sum(number) FROM numbers(10)"
45
You can also use the
--query
command-line option:
bash
$ clickhouse-client --query "SELECT uniq(number) FROM numbers(10)"
10
You can provide a query on
stdin
:
bash
$ echo "SELECT avg(number) FROM numbers(10)" | clickhouse-client
4.5
Assuming the existence of a table
messages
, you can also insert data from the command line:
bash
$ echo -e "Hello\nGoodbye" | clickhouse-client --query "INSERT INTO messages FORMAT CSV"
When
--query
is specified, any input is appended to the request after a line feed.
Inserting a CSV file into a remote ClickHouse service {#cloud-example}
This example is inserting a sample dataset CSV file,
cell_towers.csv
into an existing table
cell_towers
in the
default
database:
bash
clickhouse-client --host HOSTNAME.clickhouse.cloud \
--port 9440 \
--user default \
--password PASSWORD \
--query "INSERT INTO cell_towers FORMAT CSVWithNames" \
< cell_towers.csv
Examples of inserting data from the command line {#more-examples}
There are several ways to insert data from the command line.
The example below inserts two rows of CSV data into a ClickHouse table using batch mode:
bash
echo -ne "1, 'some text', '2016-08-14 00:00:00'\n2, 'some more text', '2016-08-14 00:00:01'" | \
clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV";
In the example below
cat <<_EOF
starts a heredoc that will read everything until it sees
_EOF
again, then outputs it:
bash
cat <<_EOF | clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV";
3, 'some text', '2016-08-14 00:00:00'
4, 'some more text', '2016-08-14 00:00:01'
_EOF
In the example below, the contents of file.csv are output to stdout using
cat
, and piped into
clickhouse-client
as input:
bash
cat file.csv | clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV";
In batch mode, the default data
format
is
TabSeparated
.
You can set the format in the
FORMAT
clause of the query as shown in the example above.
Queries with parameters {#cli-queries-with-parameters}
You can specify parameters in a query and pass values to it with command-line options.
This avoids formatting a query with specific dynamic values on the client side.
For example:
bash
$ clickhouse-client --param_parName="[1, 2]" --query "SELECT {parName: Array(UInt16)}"
[1,2]
It is also possible to set parameters from within an
interactive session
:
```text
$ clickhouse-client
ClickHouse client version 25.X.X.XXX (official build).
:) SET param_parName='[1, 2]';
SET param_parName = '[1, 2]'
Query id: 7ac1f84e-e89a-4eeb-a4bb-d24b8f9fd977
Ok.
0 rows in set. Elapsed: 0.000 sec.
:) SELECT {parName:Array(UInt16)}
SELECT {parName:Array(UInt16)}
Query id: 0358a729-7bbe-4191-bb48-29b063c548a7
   ┌─_CAST([1, 2], 'Array(UInt16)')─┐
1. │ [1,2]                          │
   └────────────────────────────────┘
1 row in set. Elapsed: 0.006 sec.
```
Query syntax {#cli-queries-with-parameters-syntax}
In the query, place the values that you want to fill using command-line parameters in braces in the following format:
sql
{<name>:<data type>}
| Parameter | Description |
|-------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
name
| Placeholder identifier. The corresponding command-line option is
--param_<name> = value
. |
|
data type
|
Data type
of the parameter.
For example, a data structure like
(integer, ('string', integer))
can have the
Tuple(UInt8, Tuple(String, UInt8))
data type (you can also use other
integer
types).
It is also possible to pass the table name, database name, and column names as parameters, in that case you would need to use
Identifier
as the data type. |
Examples {#cli-queries-with-parameters-examples}
```bash
$ clickhouse-client --param_tuple_in_tuple="(10, ('dt', 10))" \
--query "SELECT * FROM table WHERE val = {tuple_in_tuple:Tuple(UInt8, Tuple(String, UInt8))}"
$ clickhouse-client --param_tbl="numbers" --param_db="system" --param_col="number" --param_alias="top_ten" \
--query "SELECT {col:Identifier} as {alias:Identifier} FROM {db:Identifier}.{tbl:Identifier} LIMIT 10"
```
AI-powered SQL generation {#ai-sql-generation}
ClickHouse Client includes built-in AI assistance for generating SQL queries from natural language descriptions. This feature helps users write complex queries without deep SQL knowledge.
The AI assistance works out of the box if you have either
OPENAI_API_KEY
or
ANTHROPIC_API_KEY
environment variable set. For more advanced configuration, see the
Configuration
section.
Usage {#ai-sql-generation-usage}
To use AI SQL generation, prefix your natural language query with
??
:
bash
:) ?? show all users who made purchases in the last 30 days
The AI will:
1. Explore your database schema automatically
2. Generate appropriate SQL based on the discovered tables and columns
3. Execute the generated query immediately
Example {#ai-sql-generation-example}
```bash
:) ?? count orders by product category
Starting AI SQL generation with schema discovery...
──────────────────────────────────────────────────
🔍 list_databases
→ system, default, sales_db
🔍 list_tables_in_database
database: sales_db
→ orders, products, categories
🔍 get_schema_for_table
database: sales_db
table: orders
→ CREATE TABLE orders (order_id UInt64, product_id UInt64, quantity UInt32, ...)
✨ SQL query generated successfully!
──────────────────────────────────────────────────
SELECT
c.name AS category,
COUNT(DISTINCT o.order_id) AS order_count
FROM sales_db.orders o
JOIN sales_db.products p ON o.product_id = p.product_id
JOIN sales_db.categories c ON p.category_id = c.category_id
GROUP BY c.name
ORDER BY order_count DESC
```
Configuration {#ai-sql-generation-configuration}
AI SQL generation requires configuring an AI provider in your ClickHouse Client configuration file. You can use either OpenAI, Anthropic, or any OpenAI-compatible API service.
Environment-based fallback {#ai-sql-generation-fallback}
If no AI configuration is specified in the config file, ClickHouse Client will automatically try to use environment variables:
First checks for
OPENAI_API_KEY
environment variable
If not found, checks for
ANTHROPIC_API_KEY
environment variable
If neither is found, AI features will be disabled
This allows quick setup without configuration files:
```bash
# Using OpenAI
export OPENAI_API_KEY=your-openai-key
clickhouse-client

# Using Anthropic
export ANTHROPIC_API_KEY=your-anthropic-key
clickhouse-client
```
Configuration file {#ai-sql-generation-configuration-file}
For more control over AI settings, configure them in your ClickHouse Client configuration file located at:
-
~/.clickhouse-client/config.xml
(XML format)
-
~/.clickhouse-client/config.yaml
(YAML format)
- Or specify a custom location with
--config-file
```xml
<config>
<ai>
<!-- Required: Your API key (or set via environment variable) -->
<api_key>your-api-key-here</api_key>
<!-- Required: Provider type (openai, anthropic) -->
<provider>openai</provider>
<!-- Model to use (defaults vary by provider) -->
<model>gpt-4o</model>
<!-- Optional: Custom API endpoint for OpenAI-compatible services -->
<!-- <base_url>https://openrouter.ai/api</base_url> -->
<!-- Schema exploration settings -->
<enable_schema_access>true</enable_schema_access>
<!-- Generation parameters -->
<temperature>0.0</temperature>
<max_tokens>1000</max_tokens>
<timeout_seconds>30</timeout_seconds>
<max_steps>10</max_steps>
<!-- Optional: Custom system prompt -->
<!-- <system_prompt>You are an expert ClickHouse SQL assistant...</system_prompt> -->
</ai>
</config>
```
```yaml
ai:
# Required: Your API key (or set via environment variable)
api_key: your-api-key-here
# Required: Provider type (openai, anthropic)
provider: openai
# Model to use
model: gpt-4o
# Optional: Custom API endpoint for OpenAI-compatible services
# base_url: https://openrouter.ai/api
# Enable schema access - allows AI to query database/table information
enable_schema_access: true
# Generation parameters
temperature: 0.0 # Controls randomness (0.0 = deterministic)
max_tokens: 1000 # Maximum response length
timeout_seconds: 30 # Request timeout
max_steps: 10 # Maximum schema exploration steps
# Optional: Custom system prompt
# system_prompt: |
# You are an expert ClickHouse SQL assistant. Convert natural language to SQL.
# Focus on performance and use ClickHouse-specific optimizations.
# Always return executable SQL without explanations.
```
Using OpenAI-compatible APIs (e.g., OpenRouter):
yaml
ai:
provider: openai # Use 'openai' for compatibility
api_key: your-openrouter-api-key
base_url: https://openrouter.ai/api/v1
model: anthropic/claude-3.5-sonnet # Use OpenRouter model naming
Minimal configuration examples:
```yaml
# Minimal config - uses environment variable for API key
ai:
  provider: openai  # Will use OPENAI_API_KEY env var

# No config at all - automatic fallback
# (Empty or no ai section - will try OPENAI_API_KEY then ANTHROPIC_API_KEY)

# Only override model - uses env var for API key
ai:
  provider: openai
  model: gpt-3.5-turbo
```
Parameters {#ai-sql-generation-parameters}
Required parameters
- `api_key` - Your API key for the AI service. Can be omitted if set via environment variable:
- OpenAI: `OPENAI_API_KEY`
- Anthropic: `ANTHROPIC_API_KEY`
- Note: API key in config file takes precedence over environment variable
- `provider` - The AI provider: `openai` or `anthropic`
- If omitted, uses automatic fallback based on available environment variables
Model configuration
- `model` - The model to use (default: provider-specific)
- OpenAI: `gpt-4o`, `gpt-4`, `gpt-3.5-turbo`, etc.
- Anthropic: `claude-3-5-sonnet-20241022`, `claude-3-opus-20240229`, etc.
- OpenRouter: Use their model naming like `anthropic/claude-3.5-sonnet`
Connection settings
- `base_url` - Custom API endpoint for OpenAI-compatible services (optional)
- `timeout_seconds` - Request timeout in seconds (default: `30`)
Schema exploration
- `enable_schema_access` - Allow AI to explore database schemas (default: `true`)
- `max_steps` - Maximum tool-calling steps for schema exploration (default: `10`)
Generation parameters
- `temperature` - Controls randomness, 0.0 = deterministic, 1.0 = creative (default: `0.0`)
- `max_tokens` - Maximum response length in tokens (default: `1000`)
- `system_prompt` - Custom instructions for the AI (optional)
How it works {#ai-sql-generation-how-it-works}
The AI SQL generator uses a multi-step process:
Schema Discovery
The AI uses built-in tools to explore your database
- Lists available databases
- Discovers tables within relevant databases
- Examines table structures via
CREATE TABLE
statements
Query Generation
Based on the discovered schema, the AI generates SQL that:
- Matches your natural language intent
- Uses correct table and column names
- Applies appropriate joins and aggregations
Execution
The generated SQL is automatically executed and results are displayed
Limitations {#ai-sql-generation-limitations}
Requires an active internet connection
API usage is subject to rate limits and costs from the AI provider
Complex queries may require multiple refinements
The AI has read-only access to schema information, not actual data
Security {#ai-sql-generation-security}
API keys are never sent to ClickHouse servers
The AI only sees schema information (table/column names and types), not actual data
All generated queries respect your existing database permissions
Connection string {#connection_string}
Usage {#connection-string-usage}
ClickHouse Client alternatively supports connecting to a ClickHouse server using a connection string similar to
MongoDB
,
PostgreSQL
,
MySQL
. It has the following syntax:
text
clickhouse:[//[user[:password]@][hosts_and_ports]][/database][?query_parameters]
| Component (all optional) | Description | Default |
|--------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------| -----------------|
|
user
| Database username. |
default
|
|
password
| Database user password. If
:
is specified and the password is blank, the client will prompt for the user's password. | - |
|
hosts_and_ports
| List of hosts and optional ports
host[:port] [, host:[port]], ...
. |
localhost:9000
|
|
database
| Database name. |
default
|
|
query_parameters
| List of key-value pairs
param1=value1[,¶m2=value2], ...
. For some parameters, no value is required. Parameter names and values are case-sensitive. | - |
Notes {#connection-string-notes}
If the username, password or database was specified in the connection string, it cannot be specified using
--user
,
--password
or
--database
(and vice versa).
The host component can either be a hostname or an IPv4 or IPv6 address.
IPv6 addresses should be in square brackets:
text
clickhouse://[2001:db8::1234]
Connection strings can contain multiple hosts.
ClickHouse Client will try to connect to these hosts in order (from left to right).
After the connection is established, no attempt to connect to the remaining hosts is made.
The connection string must be specified as the first argument of
clickhouse-client
.
The connection string can be combined with an arbitrary number of other
command-line options
except
--host
and
--port
.
The following keys are allowed for
query_parameters
:
| Key | Description |
|-------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------|
|
secure
(or
s
) | If specified, the client will connect to the server over a secure connection (TLS). See
--secure
in the
command-line options
. |
Percent encoding
Non-US ASCII, spaces and special characters in the following parameters must be
percent-encoded
:
-
user
-
password
-
hosts
-
database
-
query parameters
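For instance, a password containing `@` or `:` must be encoded before it can appear in a connection string. A minimal sketch (the password `p@ss:word` and user `my_user` are made up; any URL-encoding tool works):

```shell
# Percent-encode a password with special characters before
# embedding it in a clickhouse:// connection string
PASS_ENC=$(python3 -c "import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1], safe=''))" 'p@ss:word')
echo "$PASS_ENC"    # prints: p%40ss%3Aword

clickhouse-client "clickhouse://my_user:${PASS_ENC}@localhost:9000"
```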
Examples {#connection_string_examples}
Connect to
localhost
on port 9000 and execute the query
SELECT 1
.
bash
clickhouse-client clickhouse://localhost:9000 --query "SELECT 1"
Connect to
localhost
as user
john
with password
secret
, host
127.0.0.1
and port
9000
bash
clickhouse-client clickhouse://john:secret@127.0.0.1:9000
Connect to
localhost
as the
default
user, host with IPv6 address
[::1]
and port
9000
.
bash
clickhouse-client clickhouse://[::1]:9000
Connect to
localhost
on port 9000 in multiline mode.
bash
clickhouse-client clickhouse://localhost:9000 '-m'
Connect to
localhost
using port 9000 as the user
default
.
```bash
clickhouse-client clickhouse://default@localhost:9000
# equivalent to:
clickhouse-client clickhouse://localhost:9000 --user default
```
Connect to
localhost
on port 9000 and default to the
my_database
database.
```bash
clickhouse-client clickhouse://localhost:9000/my_database
# equivalent to:
clickhouse-client clickhouse://localhost:9000 --database my_database
```
Connect to
localhost
on port 9000 and default to the
my_database
database specified in the connection string and a secure connection using the shorthanded
s
parameter.
```bash
clickhouse-client clickhouse://localhost/my_database?s
# equivalent to:
clickhouse-client clickhouse://localhost/my_database -s
```
Connect to the default host using the default port, the default user, and the default database.
bash
clickhouse-client clickhouse:
Connect to the default host using the default port, as the user
my_user
and no password.
```bash
clickhouse-client clickhouse://my_user@
# A blank password between : and @ means the client will prompt for the password before connecting.
clickhouse-client clickhouse://my_user:@
```
Connect to
localhost
using the email as the user name.
@
symbol is percent encoded to
%40
.
bash
clickhouse-client clickhouse://some_user%40some_mail.com@localhost:9000
Connect to one of two hosts:
192.168.1.15
,
192.168.1.25
.
bash
clickhouse-client clickhouse://192.168.1.15,192.168.1.25
Query ID format {#query-id-format}
In interactive mode, ClickHouse Client shows the query ID for every query. By default, the ID is formatted like this:
sql
Query id: 927f137d-00f1-4175-8914-0dd066365e96
A custom format may be specified in a configuration file inside a
query_id_formats
tag. The
{query_id}
placeholder in the format string is replaced with the query ID. Several format strings are allowed inside the tag.
This feature can be used to generate URLs to facilitate profiling of queries.
Example
xml
<config>
<query_id_formats>
<speedscope>http://speedscope-host/#profileURL=qp%3Fid%3D{query_id}</speedscope>
</query_id_formats>
</config>
With the configuration above, the ID of a query is shown in the following format:
response
speedscope:http://speedscope-host/#profileURL=qp%3Fid%3Dc8ecc783-e753-4b38-97f1-42cddfb98b7d
Configuration files {#configuration_files}
ClickHouse Client uses the first existing file of the following:
A file that is defined with the
-c [ -C, --config, --config-file ]
parameter.
./clickhouse-client.[xml|yaml|yml]
~/.clickhouse-client/config.[xml|yaml|yml]
/etc/clickhouse-client/config.[xml|yaml|yml]
See the sample configuration file in the ClickHouse repository:
clickhouse-client.xml
xml
<config>
<user>username</user>
<password>password</password>
<secure>true</secure>
<openSSL>
<client>
<caConfig>/etc/ssl/cert.pem</caConfig>
</client>
</openSSL>
</config>
yaml
user: username
password: 'password'
secure: true
openSSL:
client:
caConfig: '/etc/ssl/cert.pem'
Environment variable options {#environment-variable-options}
The user name, password and host can be set via environment variables
CLICKHOUSE_USER
,
CLICKHOUSE_PASSWORD
and
CLICKHOUSE_HOST
.
Command line arguments
--user
,
--password
or
--host
, or a
connection string
(if specified) take precedence over environment variables.
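For example (the hostname and credentials below are made up), a connection can be configured once in the environment:

```shell
# Credentials picked up automatically by clickhouse-client
export CLICKHOUSE_HOST=chnode.example.com
export CLICKHOUSE_USER=analyst
export CLICKHOUSE_PASSWORD='s3cret'

# No --host/--user/--password flags needed now:
clickhouse-client --query "SELECT 1"

# An explicit flag still wins over the environment:
clickhouse-client --user default --query "SELECT 1"
```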
Command-line options {#command-line-options}
All command-line options can be specified directly on the command line or as defaults in the
configuration file
.
General options {#command-line-options-general}
| Option | Description | Default |
|-----------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------|------------------------------|
|
-c [ -C, --config, --config-file ] <path-to-file>
| The location of the configuration file for the client, if it is not at one of the default locations. See
Configuration Files
. | - |
|
--help
| Print usage summary and exit. Combine with
--verbose
to display all possible options including query settings. | - |
|
--history_file <path-to-file>
| Path to a file containing the command history. | - |
|
--history_max_entries
| Maximum number of entries in the history file. |
1000000
(1 million) |
|
--prompt <prompt>
| Specify a custom prompt. | The
display_name
of the server |
|
--verbose
| Increase output verbosity. | - |
|
-V [ --version ]
| Print version and exit. | - |
Connection options {#command-line-options-connection}
| Option | Description | Default |
|----------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------|
|
--connection <name>
| The name of preconfigured connection details from the configuration file. See
Connection credentials
. | - |
|
-d [ --database ] <database>
| Select the database to default to for this connection. | The current database from the server settings (
default
by default) |
|
-h [ --host ] <host>
| The hostname of the ClickHouse server to connect to. Can either be a hostname or an IPv4 or IPv6 address. Multiple hosts can be passed via multiple arguments. |
localhost
|
|
--jwt <value>
| Use JSON Web Token (JWT) for authentication.
Server JWT authorization is only available in ClickHouse Cloud. | - |
|
--no-warnings
| Disable showing warnings from
system.warnings
when the client connects to the server. | - |
|
--password <password>
| The password of the database user. You can also specify the password for a connection in the configuration file. If you do not specify the password, the client will ask for it. | - |
|
--port <port>
| The port the server is accepting connections on. The default ports are 9440 (TLS) and 9000 (no TLS).
Note: The client uses the native protocol and not HTTP(S). |
9440
if
--secure
is specified,
9000
otherwise. Always defaults to
9440
if the hostname ends in
.clickhouse.cloud
. |
|
-s [ --secure ]
| Whether to use TLS.
Enabled automatically when connecting to port 9440 (the default secure port) or ClickHouse Cloud.
You might need to configure your CA certificates in the
configuration file
. The available configuration settings are the same as for
server-side TLS configuration
. | Auto-enabled when connecting to port 9440 or ClickHouse Cloud |
|
--ssh-key-file <path-to-file>
| File containing the SSH private key for authenticating with the server. | - |
|
--ssh-key-passphrase <value>
| Passphrase for the SSH private key specified in
--ssh-key-file
. | - |
|
-u [ --user ] <username>
| The database user to connect as. |
default
|
:::note
Instead of the
--host
,
--port
,
--user
and
--password
options, the client also supports
connection strings
.
:::
Query options {#command-line-options-query}
| Option | Description |
|---------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
--param_<name>=<value>
| Substitution value for a parameter of a
query with parameters
. |
|
-q [ --query ] <query>
| The query to run in batch mode. Can be specified multiple times (
--query "SELECT 1" --query "SELECT 2"
) or once with multiple semicolon-separated queries (
--query "SELECT 1; SELECT 2;"
). In the latter case,
INSERT
queries with formats other than
VALUES
must be separated by empty lines.
A single query can also be specified without a parameter:
clickhouse-client "SELECT 1"
Cannot be used together with
--queries-file
. |
|
--queries-file <path-to-file>
| Path to a file containing queries.
--queries-file
can be specified multiple times, e.g.
--queries-file queries1.sql --queries-file queries2.sql
.
Cannot be used together with
--query
. |
|
-m [ --multiline ]
| If specified, allow multiline queries (do not send the query on Enter). Queries will be sent only when they are ended with a semicolon. |
Query settings {#command-line-options-query-settings}
Query settings can be specified as command-line options in the client, for example:
bash
$ clickhouse-client --max_threads 1
See
Settings
for a list of settings.
Formatting options {#command-line-options-formatting}
| Option | Description | Default |
|---------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------|
|
-f [ --format ] <format>
| Use the specified format to output the result.
See
Formats for Input and Output Data
for a list of supported formats. |
TabSeparated
|
|
--pager <command>
| Pipe all output into this command. Typically
less
(e.g.,
less -S
to display wide result sets) or similar. | - |
|
-E [ --vertical ]
| Use the
Vertical format
to output the result. This is the same as
--format Vertical
. In this format, each value is printed on a separate line, which is helpful when displaying wide tables. | - |
Execution details {#command-line-options-execution-details}
| Option | Description | Default |
|-----------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------|
|
--enable-progress-table-toggle
| Enable toggling of the progress table by pressing the control key (Space). Only applicable in interactive mode with progress table printing enabled. |
enabled
|
|
--hardware-utilization
| Print hardware utilization information in progress bar. | - |
|
--memory-usage
| If specified, print memory usage to
stderr
in non-interactive mode.
Possible values:
•
none
- do not print memory usage
•
default
- print number of bytes
•
readable
- print memory usage in human-readable format | - |
|
--print-profile-events
| Print
ProfileEvents
packets. | - |
|
--progress
| Print progress of query execution.
Possible values:
•
tty\|on\|1\|true\|yes
- outputs to the terminal in interactive mode
•
err
- outputs to
stderr
in non-interactive mode
•
off\|0\|false\|no
- disables progress printing |
tty
in interactive mode,
off
in non-interactive (batch) mode |
|
--progress-table
| Print a progress table with changing metrics during query execution. | {"source_file": "cli.md"} | [
0.008164131082594395,
0.0495578832924366,
0.012806816026568413,
0.009949604980647564,
-0.02345825545489788,
0.058637846261262894,
0.02380196750164032,
0.04966822266578674,
0.03391270339488983,
-0.045479606837034225,
0.030532585456967354,
-0.050821755081415176,
-0.06302782893180847,
-0.0359... |
953e74a6-4836-4d2d-9b40-b5abda28e393 | tty
in interactive mode,
off
in non-interactive (batch) mode |
|
--progress-table
| Print a progress table with changing metrics during query execution.
Possible values:
•
tty\|on\|1\|true\|yes
- outputs to the terminal in interactive mode
•
err
- outputs to
stderr
in non-interactive mode
•
off\|0\|false\|no
- disables the progress table |
tty
in interactive mode,
off
in non-interactive (batch) mode |
|
--stacktrace
| Print stack traces of exceptions. | - |
|
-t [ --time ]
| Print query execution time to
stderr
in non-interactive mode (for benchmarks). | - | | {"source_file": "cli.md"} | [
0.04738963022828102,
-0.004016192629933357,
-0.06908855587244034,
0.04289281740784645,
-0.0626324787735939,
-0.03480936959385872,
-0.007565890904515982,
0.05199496075510979,
-0.06177883222699165,
-0.01694416254758835,
0.036715175956487656,
-0.03305727615952492,
0.009871876798570156,
-0.033... |
bc7fb78d-0b2e-485b-a4f7-c31e3e7e2160 | description: 'Documentation for the Apache Arrow Flight interface in ClickHouse, allowing Flight SQL clients to connect to ClickHouse'
sidebar_label: 'Arrow Flight Interface'
sidebar_position: 26
slug: /interfaces/arrowflight
title: 'Arrow Flight Interface'
doc_type: 'reference'
Apache Arrow Flight Interface
ClickHouse supports integration with the
Apache Arrow Flight
protocol, a high-performance RPC framework designed for efficient columnar data transport using the Arrow IPC format over gRPC.
This interface allows Flight SQL clients to query ClickHouse and retrieve results in the Arrow format, providing high throughput and low latency for analytical workloads.
Features {#features}
Execute SQL queries via the Arrow Flight SQL protocol
Stream query results in Apache Arrow format
Integration with BI tools and custom data applications that support Arrow Flight
Lightweight and performant communication over gRPC
Limitations {#limitations}
The Arrow Flight interface is currently experimental and under active development. Known limitations include:
Limited support for complex ClickHouse-specific SQL features
Not all Arrow Flight SQL metadata operations are implemented yet
No built-in authentication or TLS configuration in the reference implementation
If you encounter compatibility issues or would like to contribute, please
create an issue
in the ClickHouse repository.
Running the Arrow Flight Server {#running-server}
To enable the Arrow Flight server in a self-managed ClickHouse instance, add the following configuration to your server config:
xml
<clickhouse>
<arrowflight_port>9005</arrowflight_port>
</clickhouse>
Restart the ClickHouse server. Upon successful startup, you should see a log message similar to:
bash
{} <Information> Application: Arrow Flight compatibility protocol: 0.0.0.0:9005
Connecting to ClickHouse via Arrow Flight SQL {#connecting-to-clickhouse}
You can use any client that supports Arrow Flight SQL. For example, using
pyarrow
:
```python
import pyarrow.flight
client = pyarrow.flight.FlightClient("grpc://localhost:9005")
ticket = pyarrow.flight.Ticket(b"SELECT number FROM system.numbers LIMIT 10")
reader = client.do_get(ticket)
for chunk in reader:
    print(chunk.data.to_pandas())
```
Compatibility {#compatibility}
The Arrow Flight interface is compatible with tools that support Arrow Flight SQL including custom applications built with:
Python (
pyarrow
)
Java (
arrow-flight
)
C++ and other gRPC-compatible languages
If a native ClickHouse connector is available for your tool (e.g. JDBC, ODBC), prefer using it unless Arrow Flight is specifically required for performance or format compatibility.
Query Cancellation {#query-cancellation}
Long-running queries can be cancelled by closing the gRPC connection from the client. Support for more advanced cancellation features is planned.
For more details, see:
Apache Arrow Flight SQL specification | {"source_file": "arrowflight.md"} | [
0.008472534827888012,
-0.12181106209754944,
-0.08207271993160248,
0.02361905761063099,
-0.09271606802940369,
-0.04321008548140526,
0.04990432783961296,
-0.07716149836778641,
-0.07083743810653687,
-0.004173044580966234,
0.0047491854056715965,
0.04534098878502846,
-0.025427255779504776,
0.02... |
2fdd3f2d-93c8-4fd4-b073-6239aab1cf53 | For more details, see:
Apache Arrow Flight SQL specification
ClickHouse GitHub Issue #7554 | {"source_file": "arrowflight.md"} | [
0.06649751961231232,
-0.18786582350730896,
-0.04419994726777077,
0.04928434267640114,
-0.10080691426992416,
-0.03701893240213394,
0.012453723698854446,
-0.07542406767606735,
-0.056788988411426544,
0.012072376906871796,
0.036660779267549515,
0.04389694705605507,
-0.004325396381318569,
-0.01... |
b562bc2a-32ff-4b5e-9db3-2418ba73d192 | description: 'Documentation for the MySQL protocol interface in ClickHouse, allowing
MySQL clients to connect to ClickHouse'
sidebar_label: 'MySQL Interface'
sidebar_position: 25
slug: /interfaces/mysql
title: 'MySQL Interface'
doc_type: 'guide'
import Image from '@theme/IdealImage';
import mysql0 from '@site/static/images/interfaces/mysql0.png';
import mysql1 from '@site/static/images/interfaces/mysql1.png';
import mysql2 from '@site/static/images/interfaces/mysql2.png';
import mysql3 from '@site/static/images/interfaces/mysql3.png';
MySQL Interface
ClickHouse supports the MySQL wire protocol. This allows clients that do not have a native ClickHouse connector to leverage the MySQL protocol instead, and it has been validated with the following BI tools:
Looker Studio
Tableau Online
QuickSight
If you try other untested clients or integrations, keep in mind the following possible limitations:
SSL implementation might not be fully compatible; there could be potential
TLS SNI
issues.
A particular tool might require dialect features (e.g., MySQL-specific functions or settings) that are not implemented yet.
If there is a native driver available (e.g.,
DBeaver
), it is always preferred to use it instead of the MySQL interface. Additionally, while most MySQL language clients should work fine, the MySQL interface is not guaranteed to be a drop-in replacement for a codebase with existing MySQL queries.
If your use case involves a particular tool that does not have a native ClickHouse driver, you would like to use it via the MySQL interface, and you find certain incompatibilities, please
create an issue
in the ClickHouse repository.
::::note
To support the SQL dialect of above BI tools better, ClickHouse's MySQL interface implicitly runs SELECT queries with setting
prefer_column_name_to_alias = 1
.
This cannot be turned off, and in rare edge cases it can lead to different behavior between queries sent to ClickHouse's normal and MySQL query interfaces.
::::
Enabling the MySQL Interface On ClickHouse Cloud {#enabling-the-mysql-interface-on-clickhouse-cloud}
After creating your ClickHouse Cloud Service, click the
Connect
button.
Change the
Connect with
drop-down to
MySQL
.
Toggle the switch to enable the MySQL interface for this specific service. This will expose port
3306
for this service and prompt you with your MySQL connection screen that includes your unique MySQL username. The password will be the same as the service's default user password.
Copy the MySQL connection string shown.
Creating multiple MySQL users in ClickHouse Cloud {#creating-multiple-mysql-users-in-clickhouse-cloud} | {"source_file": "mysql.md"} | [
-0.012838556431233883,
-0.04711594060063362,
-0.03240454941987991,
-0.008181287907063961,
-0.03509078547358513,
-0.047563403844833374,
0.0324600450694561,
0.024813024327158928,
-0.04466364160180092,
-0.015359368175268173,
0.0666419193148613,
-0.013365386985242367,
0.1854294240474701,
0.030... |
ca67a958-7fba-480a-8dfd-2f6f63126b40 | Copy the MySQL connection string shown.
Creating multiple MySQL users in ClickHouse Cloud {#creating-multiple-mysql-users-in-clickhouse-cloud}
By default, there is a built-in
mysql4<subdomain>
user, which uses the same password as the
default
one. The
<subdomain>
part is the first segment of your ClickHouse Cloud hostname. This format is necessary to work with tools that implement a secure connection but don't provide
SNI information in their TLS handshake
, which makes internal routing impossible without an extra hint in the username (the MySQL console client is one such tool).
Because of this, we
highly recommend
following the
mysql4<subdomain>_<username>
format when creating a new user intended to be used with the MySQL interface, where
<subdomain>
is a hint to identify your Cloud service, and
<username>
is an arbitrary suffix of your choice.
:::tip
For ClickHouse Cloud hostname like
foobar.us-east1.aws.clickhouse.cloud
, the
<subdomain>
part equals
foobar
, and a custom MySQL username could look like
mysql4foobar_team1
.
:::
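The username convention above is straightforward to derive programmatically. Below is a small, hypothetical Python helper (the function name is ours, not part of any ClickHouse tooling) that builds the `mysql4<subdomain>_<username>` string from a Cloud hostname:

```python
def cloud_mysql_username(hostname: str, suffix: str = "") -> str:
    """Build a MySQL-interface username for a ClickHouse Cloud host.

    The <subdomain> hint is the first segment of the Cloud hostname;
    an optional suffix produces a custom user name.
    """
    subdomain = hostname.split(".")[0]
    return f"mysql4{subdomain}" + (f"_{suffix}" if suffix else "")

print(cloud_mysql_username("foobar.us-east1.aws.clickhouse.cloud"))           # mysql4foobar
print(cloud_mysql_username("foobar.us-east1.aws.clickhouse.cloud", "team1"))  # mysql4foobar_team1
```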
You can create extra users to use with the MySQL interface if, for example, you need to apply extra settings.
1.
Optional - create a
settings profile
to apply to your custom user. For example,
my_custom_profile
with an extra setting which will be applied by default when we connect with the user we create later:
sql
CREATE SETTINGS PROFILE my_custom_profile SETTINGS prefer_column_name_to_alias=1;
prefer_column_name_to_alias
is used here just as an example; you can use other settings.
2.
Create a user
using the following format:
mysql4<subdomain>_<username>
(
see above
). The password must be in double SHA1 format. For example:
sql
CREATE USER mysql4foobar_team1 IDENTIFIED WITH double_sha1_password BY 'YourPassword42$';
or if you want to use a custom profile for this user:
sql
CREATE USER mysql4foobar_team1 IDENTIFIED WITH double_sha1_password BY 'YourPassword42$' SETTINGS PROFILE 'my_custom_profile';
where
my_custom_profile
is the name of the profile you created earlier.
3.
Grant
the new user the necessary permissions to interact with the desired tables or databases. For example, if you want to grant access to
system.query_log
only:
sql
GRANT SELECT ON system.query_log TO mysql4foobar_team1;
Use the created user to connect to your ClickHouse Cloud service with the MySQL interface.
Troubleshooting multiple MySQL users in ClickHouse Cloud {#troubleshooting-multiple-mysql-users-in-clickhouse-cloud}
If you created a new MySQL user, and you see the following error while connecting via MySQL CLI client:
sql
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading authorization packet', system error: 54
In this case, ensure that the username follows the
mysql4<subdomain>_<username>
format, as described (
above
). | {"source_file": "mysql.md"} | [
0.021236365661025047,
-0.07199092209339142,
-0.028317194432020187,
-0.05562954023480415,
-0.1497218757867813,
-0.0775427520275116,
0.1061846986413002,
-0.04178718104958534,
0.03753412514925003,
0.027560267597436905,
-0.009081834927201271,
-0.09951591491699219,
0.18546603620052338,
-0.01175... |
846f8492-7ec6-415d-9b44-298b57775b8f | In this case, ensure that the username follows the
mysql4<subdomain>_<username>
format, as described (
above
).
Enabling the MySQL interface on self-managed ClickHouse {#enabling-the-mysql-interface-on-self-managed-clickhouse}
Add the
mysql_port
setting to your server's configuration file. For example, you could define the port in a new XML file in your
config.d/
folder
:
xml
<clickhouse>
<mysql_port>9004</mysql_port>
</clickhouse>
Start your ClickHouse server and look for a log message similar to the following that mentions Listening for MySQL compatibility protocol:
bash
{} <Information> Application: Listening for MySQL compatibility protocol: 127.0.0.1:9004
Connect MySQL to ClickHouse {#connect-mysql-to-clickhouse}
The following command demonstrates how to connect the MySQL client
mysql
to ClickHouse:
bash
mysql --protocol tcp -h [hostname] -u [username] -P [port_number] [database_name]
For example:
bash
$ mysql --protocol tcp -h 127.0.0.1 -u default -P 9004 default
Output if a connection succeeded:
```text
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 20.2.1.1-ClickHouse
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
```
For compatibility with all MySQL clients, it is recommended to specify the user password with
double SHA1
in the configuration file.
If the user password is specified using
SHA256
, some clients won't be able to authenticate (mysqljs and old versions of the MySQL and MariaDB command-line tools).
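For illustration, here is a minimal Python sketch of the double SHA1 scheme (hex-encoded SHA1 applied to the binary SHA1 of the password). This is a sketch of the general scheme as we read it, so verify the resulting hash against your ClickHouse version before putting it in a configuration file:

```python
import hashlib

def double_sha1_hex(password: str) -> str:
    # Double SHA1: hex(SHA1(SHA1(password))), where the inner SHA1 is binary.
    inner = hashlib.sha1(password.encode("utf-8")).digest()
    return hashlib.sha1(inner).hexdigest()

print(double_sha1_hex("YourPassword42$"))  # 40 hex characters
```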
Restrictions:
prepared queries are not supported
some data types are sent as strings
To cancel a long query, use the
KILL QUERY connection_id
statement (it is replaced with
KILL QUERY WHERE query_id = connection_id
while being processed). For example:
bash
$ mysql --protocol tcp -h mysql_server -P 9004 default -u default --password=123 -e "KILL QUERY 123456;" | {"source_file": "mysql.md"} | [
0.08046510815620422,
-0.07151277363300323,
-0.027433907613158226,
-0.07649305462837219,
-0.12459451705217361,
-0.0806899145245552,
0.04621275141835213,
0.014222442172467709,
-0.06543664634227753,
-0.020804332569241524,
-0.0012389793992042542,
-0.06958318501710892,
0.10485170036554337,
0.04... |
d1b1874b-0f8f-4d2a-ac62-4eeebf4ee9ab | description: 'Documentation for the SSH interface in ClickHouse'
keywords: ['client', 'ssh', 'putty']
sidebar_label: 'SSH Interface'
sidebar_position: 60
slug: /interfaces/ssh
title: 'SSH Interface'
doc_type: 'reference'
import ExperimentalBadge from '@theme/badges/ExperimentalBadge';
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
SSH interface with PTY
Preface {#preface}
The ClickHouse server allows you to connect to it directly using the SSH protocol. Any SSH client can be used.
After creating a
database user identified by an SSH key
:
sql
CREATE USER abcuser IDENTIFIED WITH ssh_key BY KEY '<REDACTED>' TYPE 'ssh-ed25519';
You can use this key to connect to a ClickHouse server. It will open a pseudoterminal (PTY) with an interactive session of clickhouse-client.
```bash
ssh -i ~/test_ssh/id_ed25519 abcuser@localhost -p 9022
ClickHouse embedded version 25.1.1.1.
ip-10-1-13-116.us-west-2.compute.internal :) SELECT 1;
SELECT 1
Query id: cdd91b7f-215b-4537-b7df-86d19bf63f64
   ┌─1─┐
1. │ 1 │
   └───┘
1 row in set. Elapsed: 0.002 sec.
```
Command execution over SSH (non-interactive mode) is also supported:
```bash
ssh -i ~/test_ssh/id_ed25519 abcuser@localhost -p 9022 "select 1"
1
```
Server configuration {#server-configuration}
To enable the SSH server capability, uncomment or place the following section in your
config.xml
:
```xml
<tcp_ssh_port>9022</tcp_ssh_port>
<ssh_server>
    <host_rsa_key>path-to-the-key</host_rsa_key>
</ssh_server>
```
The host key is an integral part of the SSH protocol. The public part of this key is stored in the
~/.ssh/known_hosts
file on the client side and is typically needed to prevent man-in-the-middle attacks. When connecting to the server for the first time you will see the message below:
shell
The authenticity of host '[localhost]:9022 ([127.0.0.1]:9022)' can't be established.
RSA key fingerprint is SHA256:3qxVlJKMr/PEKw/hfeg06HAK451Tt0eenhwqQvh58Do.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])?
This, in fact, means: "Do you want to remember the public key of this host and continue connecting?"
You can tell your SSH client not to verify the host by passing an option:
bash
ssh -o "StrictHostKeyChecking no" user@host
Configuring embedded client {#configuring-embedded-client}
You can pass options to the embedded client similar to the ordinary
clickhouse-client
, but with a few limitations.
Since this is the SSH protocol, the only way to pass parameters to the target host is through environment variables.
For example, setting the
format
can be done this way:
```bash
ssh -o SetEnv="format=Pretty" -i ~/test_ssh/id_ed25519 abcuser@localhost -p 9022 "SELECT 1"
   ┏━━━┓
   ┃ 1 ┃
   ┡━━━┩
1. │ 1 │
   └───┘
```
You can change any user-level setting this way and additionally pass most of the ordinary
clickhouse-client
options (except those which don't make sense in this setup).
0.008662298321723938,
-0.03296290338039398,
-0.07676459848880768,
-0.012846534140408039,
-0.11166441440582275,
0.04505544155836105,
0.02304367534816265,
-0.030576210469007492,
-0.05230580270290375,
-0.016514906659722328,
0.047733038663864136,
-0.020690850913524628,
0.11049238592386246,
0.0... |
2765feab-f615-4bcb-bd1d-79ee20ada0f6 | You are able to change any user-level setting this way and additionally pass most of the ordinary
clickhouse-client
options (except ones which don't make sense in this setup.)
Important:
If both the
query
option and an SSH command are passed, the latter is added to the list of queries to execute:
bash
ubuntu ip-10-1-13-116@~$ ssh -o SetEnv="format=Pretty query=\"SELECT 2;\"" -i ~/test_ssh/id_ed25519 abcuser@localhost -p 9022 "SELECT 1"
   ┏━━━┓
   ┃ 2 ┃
   ┡━━━┩
1. │ 2 │
   └───┘
   ┏━━━┓
   ┃ 1 ┃
   ┡━━━┩
1. │ 1 │
   └───┘ | {"source_file": "ssh.md"} | [
0.045736491680145264,
-0.01492569874972105,
0.001184425433166325,
0.02336833067238331,
-0.08149168640375137,
0.009394550696015358,
0.04520360007882118,
-0.030897904187440872,
-0.06183630973100662,
-0.028206434100866318,
0.03597874194383621,
-0.04277132451534271,
0.059859633445739746,
-0.01... |
ee42036d-852e-4d6f-bbae-44246bd20a4f | description: 'Documentation for the PostgreSQL wire protocol interface in ClickHouse'
sidebar_label: 'PostgreSQL Interface'
sidebar_position: 20
slug: /interfaces/postgresql
title: 'PostgreSQL Interface'
doc_type: 'reference'
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
PostgreSQL Interface
ClickHouse supports the PostgreSQL wire protocol, which allows you to use Postgres clients to connect to ClickHouse. In a sense, ClickHouse can pretend to be a PostgreSQL instance, allowing you to connect PostgreSQL client applications that are not directly supported by ClickHouse (for example, Amazon Redshift) to ClickHouse.
To enable the PostgreSQL wire protocol, add the
postgresql_port
setting to your server's configuration file. For example, you could define the port in a new XML file in your
config.d
folder:
xml
<clickhouse>
<postgresql_port>9005</postgresql_port>
</clickhouse>
Start your ClickHouse server and look for a log message similar to the following that mentions
Listening for PostgreSQL compatibility protocol
:
response
{} <Information> Application: Listening for PostgreSQL compatibility protocol: 127.0.0.1:9005
Connect psql to ClickHouse {#connect-psql-to-clickhouse}
The following command demonstrates how to connect the PostgreSQL client
psql
to ClickHouse:
bash
psql -p [port] -h [hostname] -U [username] [database_name]
For example:
bash
psql -p 9005 -h 127.0.0.1 -U alice default
:::note
The
psql
client requires a login with a password, so you will not be able to connect using the
default
user with no password. Either assign a password to the
default
user, or login as a different user.
:::
The
psql
client prompts for the password:
```response
Password for user alice:
psql (14.2, server 22.3.1.1)
WARNING: psql major version 14, server major version 22.
Some psql features might not work.
Type "help" for help.
default=>
```
And that's it! You now have a PostgreSQL client connected to ClickHouse, and all commands and queries are executed on ClickHouse.
:::note
The PostgreSQL protocol currently only supports plain-text passwords.
:::
Using SSL {#using-ssl}
If you have SSL/TLS configured on your ClickHouse instance, then
postgresql_port
will use the same settings (the port is shared for both secure and insecure clients).
Each client has its own method of connecting using SSL. The following command demonstrates how to pass in the certificates and key to securely connect
psql
to ClickHouse:
bash
psql "port=9005 host=127.0.0.1 user=alice dbname=default sslcert=/path/to/certificate.pem sslkey=/path/to/key.pem sslrootcert=/path/to/rootcert.pem sslmode=verify-ca"
Configuring ClickHouse user authentication with SCRAM-SHA-256 {#using-scram-sha256} | {"source_file": "postgresql.md"} | [
-0.04586120694875717,
-0.0619618259370327,
-0.05316135659813881,
-0.005323733668774366,
-0.0826440081000328,
0.04660937190055847,
0.011474359780550003,
-0.07474631816148758,
-0.029303397983312607,
-0.023370226845145226,
-0.005434051156044006,
0.02885199338197708,
-0.02327474020421505,
-0.0... |
a15b28f1-e82b-4772-ac34-0996d0ec4124 | Configuring ClickHouse user authentication with SCRAM-SHA-256 {#using-scram-sha256}
To ensure secure user authentication in ClickHouse, it is recommended to use the SCRAM-SHA-256 protocol. Configure the user by specifying the
password_scram_sha256_hex
element in the users.xml file. The password hash must be generated with num_iterations=4096.
Ensure that the psql client supports and negotiates SCRAM-SHA-256 during connection.
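For context, the salted-password step of SCRAM-SHA-256 (RFC 5802/RFC 7677) is PBKDF2-HMAC-SHA256, here with the required 4096 iterations. The Python sketch below shows only that standard derivation under those assumptions; generate the exact hex value for `password_scram_sha256_hex` with the tooling described in the ClickHouse documentation:

```python
import hashlib
import os

def scram_salted_password(password: str, salt: bytes, iterations: int = 4096) -> bytes:
    # SaltedPassword := PBKDF2-HMAC-SHA256(password, salt, iterations), per RFC 5802.
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)

salt = os.urandom(16)  # a random per-user salt
print(scram_salted_password("abacaba", salt).hex())  # 64 hex characters (32 bytes)
```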
Example configuration for user
user_with_sha256
with the password
abacaba
:
xml
<user_with_sha256>
<password_scram_sha256_hex>04e7a70338d7af7bb6142fe7e19fef46d9b605f3e78b932a60e8200ef9154976</password_scram_sha256_hex>
</user_with_sha256>
View the
PostgreSQL docs
for more details on their SSL settings. | {"source_file": "postgresql.md"} | [
0.02486373856663704,
-0.028359949588775635,
-0.14809678494930267,
-0.08278346061706543,
-0.08584640175104141,
0.04096674174070358,
0.04081381484866142,
-0.029411306604743004,
-0.017490778118371964,
-0.036442290991544724,
-0.0028572010342031717,
0.03983364626765251,
0.02621326595544815,
-0.... |
d8198593-f09c-42f9-9db8-c38cb978e56a | description: 'Documentation for the gRPC interface in ClickHouse'
sidebar_label: 'gRPC Interface'
sidebar_position: 25
slug: /interfaces/grpc
title: 'gRPC Interface'
doc_type: 'reference'
gRPC Interface
Introduction {#grpc-interface-introduction}
ClickHouse supports the
gRPC
interface, an open-source remote procedure call system that uses HTTP/2 and
Protocol Buffers
. The implementation of gRPC in ClickHouse supports:
SSL;
authentication;
sessions;
compression;
parallel queries through the same channel;
cancellation of queries;
getting progress and logs;
external tables.
The specification of the interface is described in
clickhouse_grpc.proto
.
gRPC configuration {#grpc-interface-configuration}
To use the gRPC interface, set
grpc_port
in the main
server configuration
. Other configuration options are shown in the following example:
```xml
<grpc_port>9100</grpc_port>
<grpc>
    <enable_ssl>false</enable_ssl>
<!-- The following two files are used only if SSL is enabled -->
<ssl_cert_file>/path/to/ssl_cert_file</ssl_cert_file>
<ssl_key_file>/path/to/ssl_key_file</ssl_key_file>
<!-- Whether server requests client for a certificate -->
<ssl_require_client_auth>false</ssl_require_client_auth>
<!-- The following file is used only if ssl_require_client_auth=true -->
<ssl_ca_cert_file>/path/to/ssl_ca_cert_file</ssl_ca_cert_file>
<!-- Default compression algorithm (applied if client doesn't specify another algorithm, see result_compression in QueryInfo).
Supported algorithms: none, deflate, gzip, stream_gzip -->
<compression>deflate</compression>
<!-- Default compression level (applied if client doesn't specify another level, see result_compression in QueryInfo).
Supported levels: none, low, medium, high -->
<compression_level>medium</compression_level>
<!-- Send/receive message size limits in bytes. -1 means unlimited -->
<max_send_message_size>-1</max_send_message_size>
<max_receive_message_size>-1</max_receive_message_size>
<!-- Enable if you want to get detailed logs -->
<verbose_logs>false</verbose_logs>
</grpc>
```
Built-in client {#grpc-client}
You can write a client in any of the programming languages supported by gRPC using the provided
specification
.
Or you can use a built-in Python client. It is placed in
utils/grpc-client/clickhouse-grpc-client.py
in the repository. The built-in client requires the
grpcio and grpcio-tools
Python modules.
The client supports the following arguments:
--help
– Shows a help message and exits.
--host HOST, -h HOST
– A server name. Default value:
localhost
. You can also use IPv4 or IPv6 addresses.
--port PORT
– A port to connect to. This port should be enabled in the ClickHouse server configuration (see
grpc_port
). Default value:
9100
.
--user USER_NAME, -u USER_NAME
– A user name. Default value:
default
.
--password PASSWORD
– A password. Default value: empty string.
-0.03399138152599335,
-0.037449710071086884,
-0.06755359470844269,
-0.052380166947841644,
-0.07384134829044342,
-0.0859210193157196,
0.009834131225943565,
-0.009674441069364548,
-0.013863371685147285,
-0.06397240608930588,
0.04444921761751175,
0.03139670565724373,
-0.01882067695260048,
0.0... |
8962b894-a8f5-41e2-bbab-5c63a2101418 | --user USER_NAME, -u USER_NAME
– A user name. Default value:
default
.
--password PASSWORD
– A password. Default value: empty string.
--query QUERY, -q QUERY
– A query to process when using non-interactive mode.
--database DATABASE, -d DATABASE
– A default database. If not specified, the current database set in the server settings is used (
default
by default).
--format OUTPUT_FORMAT, -f OUTPUT_FORMAT
– A result output
format
. Default value for interactive mode:
PrettyCompact
.
--debug
– Enables showing debug information.
To run the client in interactive mode, call it without the
--query
argument.
In batch mode, query data can be passed via
stdin
.
Client Usage Example
In the following example a table is created and loaded with data from a CSV file. Then the content of the table is queried.
```bash
./clickhouse-grpc-client.py -q "CREATE TABLE grpc_example_table (id UInt32, text String) ENGINE = MergeTree() ORDER BY id;"
echo -e "0,Input data for\n1,gRPC protocol example" > a.csv
cat a.csv | ./clickhouse-grpc-client.py -q "INSERT INTO grpc_example_table FORMAT CSV"
./clickhouse-grpc-client.py --format PrettyCompact -q "SELECT * FROM grpc_example_table;"
```
Result:
text
┌─id─┬─text──────────────────┐
│  0 │ Input data for        │
│  1 │ gRPC protocol example │
└────┴───────────────────────┘ | {"source_file": "grpc.md"} | [
0.018875602632761,
0.00580764701589942,
-0.12321019917726517,
0.052331045269966125,
-0.16125698387622833,
-0.08433374017477036,
0.1042909175157547,
0.10102622210979462,
-0.06326333433389664,
-0.017190152779221535,
0.0029862727969884872,
-0.028337152674794197,
0.07205640524625778,
-0.038848... |
b66909d3-9319-40af-826f-b0498dc4888b | description: 'Documentation for the ClickHouse ODBC driver'
sidebar_label: 'ODBC Driver'
sidebar_position: 35
slug: /interfaces/odbc
title: 'ODBC Driver'
doc_type: 'reference'
ODBC driver
Use the
official ODBC driver
for accessing ClickHouse as a data source. | {"source_file": "odbc.md"} | [
-0.021967189386487007,
-0.06430814415216446,
-0.09608671069145203,
0.06710316985845566,
0.012930570170283318,
-0.06081191822886467,
0.020773939788341522,
0.011351842433214188,
-0.045064542442560196,
-0.09390515834093094,
0.005371682345867157,
0.024462129920721054,
0.002901140134781599,
-0.... |
ce9354f2-1ea8-451d-9c7e-b53fca711f50 | description: 'Guide to using the JDBC driver for connecting to ClickHouse from Java
applications'
sidebar_label: 'JDBC Driver'
sidebar_position: 20
slug: /interfaces/jdbc
title: 'JDBC Driver'
doc_type: 'guide'
JDBC driver
Use the
official JDBC driver
(and Java client) to access ClickHouse from your Java applications. | {"source_file": "jdbc.md"} | [
0.002926945686340332,
-0.02371189184486866,
-0.06627518683671951,
-0.06023739278316498,
-0.06809141486883163,
0.006478547118604183,
0.02767322026193142,
0.011709156446158886,
-0.12009014189243317,
-0.10662185400724411,
-0.011582620441913605,
0.0032059524673968554,
0.004647386260330677,
-0.... |
0ec39a06-764e-4483-b15a-c9379b693992 | description: 'Documentation for the ClickHouse C++ client library and integration
with u-server framework'
sidebar_label: 'C++ Client Library'
sidebar_position: 24
slug: /interfaces/cpp
title: 'C++ Client Library'
doc_type: 'reference'
C++ client library
See the README in the
clickhouse-cpp
repository.
Userver asynchronous framework
userver (beta)
has built-in support for ClickHouse.
-0.07746541500091553,
-0.006849118508398533,
-0.03091328777372837,
-0.042064737528562546,
-0.023894552141427994,
0.020097773522138596,
-0.024030884727835655,
0.011422349140048027,
-0.04061443358659744,
-0.06948959827423096,
0.007746698800474405,
0.04114867374300957,
-0.012789687141776085,
... |
1dc53288-aa7f-4e8e-8100-7bc58ebd7164 | description: 'Page describing automatic schema inference from input data in ClickHouse'
sidebar_label: 'Schema inference'
slug: /interfaces/schema-inference
title: 'Automatic schema inference from input data'
doc_type: 'reference'
ClickHouse can automatically determine the structure of input data in almost all supported
Input formats
.
This document describes when schema inference is used, how it works with different input formats, and which settings
can control it.
Usage {#usage}
Schema inference is used when ClickHouse needs to read the data in a specific data format and the structure is unknown.
Table functions
file
,
s3
,
url
,
hdfs
,
azureBlobStorage
. {#table-functions-file-s3-url-hdfs-azureblobstorage}
These table functions have the optional argument
structure
with the structure of input data. If this argument is not specified or set to
auto
, the structure will be inferred from the data.
Example:
Let's say we have a file
hobbies.jsonl
in JSONEachRow format in the
user_files
directory with this content:
json
{"id" : 1, "age" : 25, "name" : "Josh", "hobbies" : ["football", "cooking", "music"]}
{"id" : 2, "age" : 19, "name" : "Alan", "hobbies" : ["tennis", "art"]}
{"id" : 3, "age" : 32, "name" : "Lana", "hobbies" : ["fitness", "reading", "shopping"]}
{"id" : 4, "age" : 47, "name" : "Brayan", "hobbies" : ["movies", "skydiving"]}
ClickHouse can read this data without you specifying its structure:
sql
SELECT * FROM file('hobbies.jsonl')
response
┌─id─┬─age─┬─name───┬─hobbies──────────────────────────┐
│  1 │  25 │ Josh   │ ['football','cooking','music']   │
│  2 │  19 │ Alan   │ ['tennis','art']                 │
│  3 │  32 │ Lana   │ ['fitness','reading','shopping'] │
│  4 │  47 │ Brayan │ ['movies','skydiving']           │
└────┴─────┴────────┴──────────────────────────────────┘
Note: the format
JSONEachRow
was automatically determined by the file extension
.jsonl
.
You can see the automatically determined structure using the
DESCRIBE
query:
sql
DESCRIBE file('hobbies.jsonl')
response
┌─name────┬─type────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ id      │ Nullable(Int64)         │              │                    │         │                  │                │
│ age     │ Nullable(Int64)         │              │                    │         │                  │                │
│ name    │ Nullable(String)        │              │                    │         │                  │                │
│ hobbies │ Array(Nullable(String)) │              │                    │         │                  │                │
└─────────┴─────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
### Table engines `File`, `S3`, `URL`, `HDFS`, `azureBlobStorage` {#table-engines-file-s3-url-hdfs-azureblobstorage}
If the list of columns is not specified in the `CREATE TABLE` query, the structure of the table is inferred automatically from the data.

**Example:**

Let's use the file `hobbies.jsonl` again. We can create a table with the engine `File` with the data from this file:

```sql
CREATE TABLE hobbies ENGINE=File(JSONEachRow, 'hobbies.jsonl')
```

```response
Ok.
```

```sql
SELECT * FROM hobbies
```

```response
┌─id─┬─age─┬─name───┬─hobbies──────────────────────────┐
│  1 │  25 │ Josh   │ ['football','cooking','music']   │
│  2 │  19 │ Alan   │ ['tennis','art']                 │
│  3 │  32 │ Lana   │ ['fitness','reading','shopping'] │
│  4 │  47 │ Brayan │ ['movies','skydiving']           │
└────┴─────┴────────┴──────────────────────────────────┘
```

```sql
DESCRIBE TABLE hobbies
```

```response
┌─name────┬─type────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ id      │ Nullable(Int64)         │              │                    │         │                  │                │
│ age     │ Nullable(Int64)         │              │                    │         │                  │                │
│ name    │ Nullable(String)        │              │                    │         │                  │                │
│ hobbies │ Array(Nullable(String)) │              │                    │         │                  │                │
└─────────┴─────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
### clickhouse-local {#clickhouse-local}

`clickhouse-local` has an optional parameter `-S/--structure` that specifies the structure of the input data. If this parameter is not specified or is set to `auto`, the structure is inferred from the data.

**Example:**

Let's use the file `hobbies.jsonl` again. We can query the data from this file using `clickhouse-local`:

```shell
clickhouse-local --file='hobbies.jsonl' --table='hobbies' --query='DESCRIBE TABLE hobbies'
```

```response
id	Nullable(Int64)
age	Nullable(Int64)
name	Nullable(String)
hobbies	Array(Nullable(String))
```

```shell
clickhouse-local --file='hobbies.jsonl' --table='hobbies' --query='SELECT * FROM hobbies'
```

```response
1	25	Josh	['football','cooking','music']
2	19	Alan	['tennis','art']
3	32	Lana	['fitness','reading','shopping']
4	47	Brayan	['movies','skydiving']
```
## Using structure from insertion table {#using-structure-from-insertion-table}

When the table functions `file`/`s3`/`url`/`hdfs` are used to insert data into a table, there is an option to use the structure of the insertion table instead of extracting it from the data.
This can improve insertion performance, because schema inference can take some time. It is also helpful when the table has an optimized schema, so no conversions between types are performed.
There is a special setting `use_structure_from_insertion_table_in_table_functions`
that controls this behaviour. It has three possible values:
- `0` - the table function extracts the structure from the data.
- `1` - the table function uses the structure from the insertion table.
- `2` - ClickHouse automatically determines whether it's possible to use the structure from the insertion table or to use schema inference. This is the default value.
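The decision behind these three values can be sketched roughly as follows (an illustrative Python model, not ClickHouse source code; `resolve_structure` and its parameters are hypothetical names):

```python
def resolve_structure(setting: int,
                      select_columns: list[str],
                      table_columns: list[str],
                      has_transformations: bool) -> str:
    """Decide which structure a table function uses for INSERT ... SELECT."""
    if setting == 0:
        return "schema_inference"
    if setting == 1:
        return "insertion_table"
    # setting == 2: use the insertion table only when every selected column
    # exists in the target table and no expression transforms the data.
    if not has_transformations and all(c in table_columns for c in select_columns):
        return "insertion_table"
    return "schema_inference"
```

The examples below correspond to the `setting == 2` branch: a matching column list uses the insertion table, while a renamed column or an expression over a column falls back to schema inference.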
**Example 1:**

Let's create a table `hobbies1` with the following structure:

```sql
CREATE TABLE hobbies1
(
    `id` UInt64,
    `age` LowCardinality(UInt8),
    `name` String,
    `hobbies` Array(String)
)
ENGINE = MergeTree
ORDER BY id;
```

And insert data from the file `hobbies.jsonl`:

```sql
INSERT INTO hobbies1 SELECT * FROM file(hobbies.jsonl)
```

In this case, all columns from the file are inserted into the table without changes, so ClickHouse will use the structure from the insertion table instead of schema inference.
**Example 2:**

Let's create a table `hobbies2` with the following structure:

```sql
CREATE TABLE hobbies2
(
    `id` UInt64,
    `age` LowCardinality(UInt8),
    `hobbies` Array(String)
)
ENGINE = MergeTree
ORDER BY id;
```

And insert data from the file `hobbies.jsonl`:

```sql
INSERT INTO hobbies2 SELECT id, age, hobbies FROM file(hobbies.jsonl)
```

In this case, all columns in the `SELECT` query are present in the table, so ClickHouse will use the structure from the insertion table.
Note that this works only for input formats that support reading a subset of columns, such as JSONEachRow, TSKV, and Parquet (so it won't work, for example, for the TSV format).
**Example 3:**

Let's create a table `hobbies3` with the following structure:

```sql
CREATE TABLE hobbies3
(
    `identifier` UInt64,
    `age` LowCardinality(UInt8),
    `hobbies` Array(String)
)
ENGINE = MergeTree
ORDER BY identifier;
```

And insert data from the file `hobbies.jsonl`:

```sql
INSERT INTO hobbies3 SELECT id, age, hobbies FROM file(hobbies.jsonl)
```

In this case, the column `id` is used in the `SELECT` query, but the table doesn't have this column (it has a column named `identifier`), so ClickHouse cannot use the structure from the insertion table, and schema inference is used instead.
**Example 4:**

Let's create a table `hobbies4` with the following structure:

```sql
CREATE TABLE hobbies4
(
    `id` UInt64,
    `any_hobby` Nullable(String)
)
ENGINE = MergeTree
ORDER BY id;
```

And insert data from the file `hobbies.jsonl`:

```sql
INSERT INTO hobbies4 SELECT id, empty(hobbies) ? NULL : hobbies[1] FROM file(hobbies.jsonl)
```

In this case, the `SELECT` query performs some operations on the column `hobbies` before inserting it into the table, so ClickHouse cannot use the structure from the insertion table, and schema inference is used instead.
## Schema inference cache {#schema-inference-cache}
For most input formats, schema inference reads some data to determine its structure, and this process can take some time.
To prevent inferring the same schema every time ClickHouse reads data from the same file, the inferred schema is cached, and when accessing the same file again, ClickHouse uses the schema from the cache.

There are special settings that control this cache:
- `schema_inference_cache_max_elements_for_{file/s3/hdfs/url/azure}` - the maximum number of cached schemas for the corresponding table function. The default value is `4096`. These settings should be set in the server config.
- `schema_inference_use_cache_for_{file,s3,hdfs,url,azure}` - turns the use of the cache for schema inference on or off. These settings can be used in queries.

The schema of a file can change when the data is modified or when format settings change.
For this reason, the schema inference cache identifies a schema by the file source, the format name, the format settings used, and the last modification time of the file.

Note: some files accessed by URL in the `url` table function may not contain information about the last modification time; for this case, there is a special setting `schema_inference_cache_require_modification_time_for_url`. Disabling this setting allows the use of a cached schema without the last modification time for such files.
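The invalidation rule above can be modelled with a minimal cache keyed by source, format, format settings, and modification time (an illustrative Python sketch, not ClickHouse's implementation; all names are hypothetical):

```python
class SchemaInferenceCache:
    """Toy schema cache: changing any key component invalidates the entry."""

    def __init__(self, max_elements: int = 4096):
        self.max_elements = max_elements
        self._cache: dict[tuple, str] = {}

    @staticmethod
    def _key(source: str, fmt: str, settings: dict, mtime: float) -> tuple:
        # Modifying the data (mtime) or any format setting yields a new key,
        # so a stale schema is never reused.
        return (source, fmt, tuple(sorted(settings.items())), mtime)

    def get(self, source, fmt, settings, mtime):
        return self._cache.get(self._key(source, fmt, settings, mtime))

    def put(self, source, fmt, settings, mtime, schema):
        if len(self._cache) >= self.max_elements:
            self._cache.pop(next(iter(self._cache)))  # drop the oldest entry
        self._cache[self._key(source, fmt, settings, mtime)] = schema
```

This mirrors the behaviour shown in the S3 examples below: re-running the same `DESCRIBE` hits the cache, while changing a format setting produces a second, independent cache entry.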
There is also a system table `schema_inference_cache` with all current schemas in the cache, and a system query `SYSTEM DROP SCHEMA CACHE [FOR File/S3/URL/HDFS]` that allows cleaning the schema cache for all sources or for a specific source.

**Examples:**

Let's try to infer the structure of a sample dataset from S3, `github-2022.ndjson.gz`, and see how the schema inference cache works:

```sql
DESCRIBE TABLE s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/github/github-2022.ndjson.gz')
SETTINGS allow_experimental_object_type = 1
```
```response
┌─name───────┬─type─────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ type       │ Nullable(String)         │              │                    │         │                  │                │
│ actor      │ Object(Nullable('json')) │              │                    │         │                  │                │
│ repo       │ Object(Nullable('json')) │              │                    │         │                  │                │
│ created_at │ Nullable(String)         │              │                    │         │                  │                │
│ payload    │ Object(Nullable('json')) │              │                    │         │                  │                │
└────────────┴──────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
5 rows in set. Elapsed: 0.601 sec.
```
```sql
DESCRIBE TABLE s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/github/github-2022.ndjson.gz')
SETTINGS allow_experimental_object_type = 1
```
```response
┌─name───────┬─type─────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ type       │ Nullable(String)         │              │                    │         │                  │                │
│ actor      │ Object(Nullable('json')) │              │                    │         │                  │                │
│ repo       │ Object(Nullable('json')) │              │                    │         │                  │                │
│ created_at │ Nullable(String)         │              │                    │         │                  │                │
│ payload    │ Object(Nullable('json')) │              │                    │         │                  │                │
└────────────┴──────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
5 rows in set. Elapsed: 0.059 sec.
```
As you can see, the second query succeeded almost instantly.
Let's try to change some settings that can affect inferred schema:
```sql
DESCRIBE TABLE s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/github/github-2022.ndjson.gz')
SETTINGS input_format_json_read_objects_as_strings = 1
```

```response
┌─name───────┬─type─────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ type       │ Nullable(String) │              │                    │         │                  │                │
│ actor      │ Nullable(String) │              │                    │         │                  │                │
│ repo       │ Nullable(String) │              │                    │         │                  │                │
│ created_at │ Nullable(String) │              │                    │         │                  │                │
│ payload    │ Nullable(String) │              │                    │         │                  │                │
└────────────┴──────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
5 rows in set. Elapsed: 0.611 sec.
```
As you can see, the schema from the cache was not used for the same file, because the setting that can affect inferred schema was changed.
Let's check the content of the `system.schema_inference_cache` table:

```sql
SELECT schema, format, source FROM system.schema_inference_cache WHERE storage='S3'
```
```response
┌─schema──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─format─┬─source────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ type Nullable(String), actor Object(Nullable('json')), repo Object(Nullable('json')), created_at Nullable(String), payload Object(Nullable('json')) │ NDJSON │ datasets-documentation.s3.eu-west-3.amazonaws.com443/datasets-documentation/github/github-2022.ndjson.gz  │
│ type Nullable(String), actor Nullable(String), repo Nullable(String), created_at Nullable(String), payload Nullable(String)                         │ NDJSON │ datasets-documentation.s3.eu-west-3.amazonaws.com443/datasets-documentation/github/github-2022.ndjson.gz  │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────┘
```
As you can see, there are two different schemas for the same file.

We can clear the schema cache using a system query:

```sql
SYSTEM DROP SCHEMA CACHE FOR S3
```

```response
Ok.
```

```sql
SELECT count() FROM system.schema_inference_cache WHERE storage='S3'
```

```response
┌─count()─┐
│       0 │
└─────────┘
```
## Text formats {#text-formats}

For text formats, ClickHouse reads the data row by row, extracts column values according to the format, and then uses recursive parsers and heuristics to determine the type of each value. The maximum number of rows and bytes read from the data during schema inference is controlled by the settings `input_format_max_rows_to_read_for_schema_inference` (25000 by default) and `input_format_max_bytes_to_read_for_schema_inference` (32 MB by default).
By default, all inferred types are `Nullable`, but you can change this with the setting `schema_inference_make_columns_nullable` (see examples in the settings section).
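The row-by-row process described above can be sketched in a few lines of Python. This is a deliberately simplified model (a handful of types and one unification rule, not ClickHouse's full recursive parsers); `infer_schema` and its helpers are hypothetical names:

```python
def infer_type(value):
    """Map one sampled value to a simplified ClickHouse-style type name."""
    if value is None:
        return None                 # unknown for now; decided by other rows
    if isinstance(value, bool):
        return "Bool"
    if isinstance(value, int):
        return "Int64"
    if isinstance(value, float):
        return "Float64"
    return "String"

def unify(a, b):
    """Combine the types seen for the same column in different rows."""
    if a is None: return b
    if b is None: return a
    if a == b: return a
    if {a, b} == {"Int64", "Float64"}: return "Float64"
    return "String"                 # incompatible types fall back to String

def infer_schema(rows, max_rows_to_read=25000):
    schema: dict[str, str | None] = {}
    for row in rows[:max_rows_to_read]:     # bounded sample, as in the setting
        for name, value in row.items():
            schema[name] = unify(schema.get(name), infer_type(value))
    # By default every inferred type is wrapped in Nullable.
    return {name: f"Nullable({t or 'String'})" for name, t in schema.items()}
```

For instance, a column holding `1` in one row and `2.5` in another unifies to `Nullable(Float64)`, while mixing a number with a string falls back to `Nullable(String)`.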
### JSON formats {#json-formats}

In JSON formats, ClickHouse parses values according to the JSON specification and then tries to find the most appropriate data type for them.

Let's see how it works, what types can be inferred, and what specific settings can be used in JSON formats.

**Examples**

Here and below, the `format` table function is used in examples.

Integers, Floats, Bools, Strings:
```sql
DESC format(JSONEachRow, '{"int" : 42, "float" : 42.42, "bool" : true, "string" : "Hello, World!"}');
```
```response
┌─name───┬─type──────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ int    │ Nullable(Int64)   │              │                    │         │                  │                │
│ float  │ Nullable(Float64) │              │                    │         │                  │                │
│ bool   │ Nullable(Bool)    │              │                    │         │                  │                │
│ string │ Nullable(String)  │              │                    │         │                  │                │
└────────┴───────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Dates, DateTimes:

```sql
DESC format(JSONEachRow, '{"date" : "2022-01-01", "datetime" : "2022-01-01 00:00:00", "datetime64" : "2022-01-01 00:00:00.000"}')
```

```response
┌─name───────┬─type────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ date       │ Nullable(Date)          │              │                    │         │                  │                │
│ datetime   │ Nullable(DateTime)      │              │                    │         │                  │                │
│ datetime64 │ Nullable(DateTime64(9)) │              │                    │         │                  │                │
└────────────┴─────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Arrays:

```sql
DESC format(JSONEachRow, '{"arr" : [1, 2, 3], "nested_arrays" : [[1, 2, 3], [4, 5, 6], []]}')
```

```response
┌─name──────────┬─type──────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ arr           │ Array(Nullable(Int64))        │              │                    │         │                  │                │
│ nested_arrays │ Array(Array(Nullable(Int64))) │              │                    │         │                  │                │
└───────────────┴───────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
If an array contains `null`, ClickHouse uses the types of the other array elements:

```sql
DESC format(JSONEachRow, '{"arr" : [null, 42, null]}')
```

```response
┌─name─┬─type───────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ arr  │ Array(Nullable(Int64)) │              │                    │         │                  │                │
└──────┴────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
If an array contains values of different types and the setting `input_format_json_infer_array_of_dynamic_from_array_of_different_types` is enabled (it is by default), the array will have type `Array(Dynamic)`:
```sql
SET input_format_json_infer_array_of_dynamic_from_array_of_different_types=1;
DESC format(JSONEachRow, '{"arr" : [42, "hello", [1, 2, 3]]}');
```
```response
┌─name─┬─type───────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ arr  │ Array(Dynamic) │              │                    │         │                  │                │
└──────┴────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Named tuples:

When the setting `input_format_json_try_infer_named_tuples_from_objects` is enabled, ClickHouse tries to infer a named Tuple from JSON objects during schema inference.
The resulting named Tuple contains all elements from all corresponding JSON objects in the sample data.
```sql
SET input_format_json_try_infer_named_tuples_from_objects = 1;
DESC format(JSONEachRow, '{"obj" : {"a" : 42, "b" : "Hello"}}, {"obj" : {"a" : 43, "c" : [1, 2, 3]}}, {"obj" : {"d" : {"e" : 42}}}')
```

```response
┌─name─┬─type────────────────────────────────────────────────────────────────────────────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ obj  │ Tuple(a Nullable(Int64), b Nullable(String), c Array(Nullable(Int64)), d Tuple(e Nullable(Int64))) │              │                    │         │                  │                │
└──────┴─────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
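The "union of all keys" idea behind this inference can be modelled with a short Python sketch (illustrative only; the type mapping is heavily simplified and `infer_named_tuple` is a hypothetical helper):

```python
def infer_named_tuple(objects):
    """Build a named-Tuple type string from the union of keys across objects."""
    elements: dict[str, str] = {}
    for obj in objects:
        for key, value in obj.items():
            if isinstance(value, dict):
                inferred = infer_named_tuple([value])   # recurse into nested objects
            elif isinstance(value, list):
                inferred = ("Array(Nullable(Int64))"
                            if all(isinstance(v, int) for v in value)
                            else "Array(Nullable(String))")
            elif isinstance(value, int):
                inferred = "Nullable(Int64)"
            else:
                inferred = "Nullable(String)"
            # First occurrence of a key fixes its type; later objects only add new keys.
            elements.setdefault(key, inferred)
    return "Tuple(" + ", ".join(f"{k} {t}" for k, t in elements.items()) + ")"
```

Running it on the three sample objects from the query above reproduces the inferred type shown in the response table.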
Unnamed tuples:

If the setting `input_format_json_infer_array_of_dynamic_from_array_of_different_types` is disabled, arrays with elements of different types are treated as unnamed Tuples in JSON formats.

```sql
SET input_format_json_infer_array_of_dynamic_from_array_of_different_types = 0;
DESC format(JSONEachRow, '{"tuple" : [1, "Hello, World!", [1, 2, 3]]}')
```
```response
┌─name──┬─type─────────────────────────────────────────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ tuple │ Tuple(Nullable(Int64), Nullable(String), Array(Nullable(Int64))) │              │                    │         │                  │                │
└───────┴──────────────────────────────────────────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
If some values are `null` or empty, the types of the corresponding values from the other rows are used:
```sql
SET input_format_json_infer_array_of_dynamic_from_array_of_different_types=0;
DESC format(JSONEachRow, $$
{"tuple" : [1, null, null]}
{"tuple" : [null, "Hello, World!", []]}
{"tuple" : [null, null, [1, 2, 3]]}
$$)
```
```response
┌─name──┬─type─────────────────────────────────────────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ tuple │ Tuple(Nullable(Int64), Nullable(String), Array(Nullable(Int64))) │              │                    │         │                  │                │
└───────┴──────────────────────────────────────────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Maps:

In JSON, objects with values of the same type can be read as a Map type.

Note: this works only when the settings `input_format_json_read_objects_as_strings` and `input_format_json_try_infer_named_tuples_from_objects` are disabled.

```sql
SET input_format_json_read_objects_as_strings = 0, input_format_json_try_infer_named_tuples_from_objects = 0;
DESC format(JSONEachRow, '{"map" : {"key1" : 42, "key2" : 24, "key3" : 4}}')
```

```response
┌─name─┬─type─────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ map  │ Map(String, Nullable(Int64)) │              │                    │         │                  │                │
└──────┴──────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
JSON Object type (if the setting `allow_experimental_object_type` is enabled):

```sql
SET allow_experimental_object_type = 1;
DESC format(JSONEachRow, $$
{"obj" : {"key1" : 42}}
{"obj" : {"key2" : "Hello, World!"}}
{"obj" : {"key1" : 24, "key3" : {"a" : 42, "b" : null}}}
$$)
```

```response
┌─name─┬─type─────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ obj  │ Object(Nullable('json')) │              │                    │         │                  │                │
└──────┴──────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Nested complex types:

```sql
DESC format(JSONEachRow, '{"value" : [[[42, 24], []], {"key1" : 42, "key2" : 24}]}')
```

```response
┌─name──┬─type─────────────────────────────────────────────────────────────────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ value │ Tuple(Array(Array(Nullable(String))), Tuple(key1 Nullable(Int64), key2 Nullable(Int64))) │              │                    │         │                  │                │
└───────┴──────────────────────────────────────────────────────────────────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
If ClickHouse cannot determine the type of some key because the data contains only nulls, empty objects, or empty arrays, the type `String` is used if the setting `input_format_json_infer_incomplete_types_as_strings` is enabled; otherwise an exception is thrown:
```sql
DESC format(JSONEachRow, '{"arr" : [null, null]}') SETTINGS input_format_json_infer_incomplete_types_as_strings = 1;
```
```response
┌─name─┬─type────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ arr  │ Array(Nullable(String)) │              │                    │         │                  │                │
└──────┴─────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
```sql
DESC format(JSONEachRow, '{"arr" : [null, null]}') SETTINGS input_format_json_infer_incomplete_types_as_strings = 0;
```

```response
Code: 652. DB::Exception: Received from localhost:9000. DB::Exception:
Cannot determine type for column 'arr' by first 1 rows of data,
most likely this column contains only Nulls or empty Arrays/Maps.
...
```
#### JSON settings {#json-settings}

##### input_format_json_try_infer_numbers_from_strings {#input_format_json_try_infer_numbers_from_strings}

Enabling this setting allows inferring numbers from string values.

This setting is disabled by default.
**Example:**

```sql
SET input_format_json_try_infer_numbers_from_strings = 1;
DESC format(JSONEachRow, $$
{"value" : "42"}
{"value" : "424242424242"}
$$)
```

```response
┌─name──┬─type────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ value │ Nullable(Int64) │              │                    │         │                  │                │
└───────┴─────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
##### input_format_json_try_infer_named_tuples_from_objects {#input_format_json_try_infer_named_tuples_from_objects}

Enabling this setting allows inferring named Tuples from JSON objects. The resulting named Tuple contains all elements from all corresponding JSON objects in the sample data.
It is useful when the JSON data is not sparse, so the data sample contains all possible object keys.

This setting is enabled by default.
**Example:**

```sql
SET input_format_json_try_infer_named_tuples_from_objects = 1;
DESC format(JSONEachRow, '{"obj" : {"a" : 42, "b" : "Hello"}}, {"obj" : {"a" : 43, "c" : [1, 2, 3]}}, {"obj" : {"d" : {"e" : 42}}}')
```

Result:
```response
┌─name─┬─type────────────────────────────────────────────────────────────────────────────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ obj  │ Tuple(a Nullable(Int64), b Nullable(String), c Array(Nullable(Int64)), d Tuple(e Nullable(Int64))) │              │                    │         │                  │                │
└──────┴─────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
```sql
SET input_format_json_try_infer_named_tuples_from_objects = 1;
DESC format(JSONEachRow, '{"array" : [{"a" : 42, "b" : "Hello"}, {}, {"c" : [1,2,3]}, {"d" : "2020-01-01"}]}')
```

Result:

```response
┌─name──┬─type─────────────────────────────────────────────────────────────────────────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ array │ Array(Tuple(a Nullable(Int64), b Nullable(String), c Array(Nullable(Int64)), d Nullable(Date))) │              │                    │         │                  │                │
└───────┴──────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
##### input_format_json_use_string_type_for_ambiguous_paths_in_named_tuples_inference_from_objects {#input_format_json_use_string_type_for_ambiguous_paths_in_named_tuples_inference_from_objects}

Enabling this setting allows using the String type for ambiguous paths during named Tuple inference from JSON objects (when `input_format_json_try_infer_named_tuples_from_objects` is enabled) instead of throwing an exception.
It allows reading JSON objects as named Tuples even if there are ambiguous paths.

Disabled by default.
**Examples**

With the setting disabled:

```sql
SET input_format_json_try_infer_named_tuples_from_objects = 1;
SET input_format_json_use_string_type_for_ambiguous_paths_in_named_tuples_inference_from_objects = 0;
DESC format(JSONEachRow, '{"obj" : {"a" : 42}}, {"obj" : {"a" : {"b" : "Hello"}}}');
```

Result:

```response
Code: 636. DB::Exception: The table structure cannot be extracted from a JSONEachRow format file. Error:
Code: 117. DB::Exception: JSON objects have ambiguous data: in some objects path 'a' has type 'Int64' and in some - 'Tuple(b String)'. You can enable setting input_format_json_use_string_type_for_ambiguous_paths_in_named_tuples_inference_from_objects to use String type for path 'a'. (INCORRECT_DATA) (version 24.3.1.1).
You can specify the structure manually. (CANNOT_EXTRACT_TABLE_STRUCTURE)
```
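The ambiguity the setting resolves is that the same path holds a scalar in one object and a nested object in another. A minimal Python sketch of this decision (illustrative only; `infer_path_type` is a hypothetical helper and the returned type names are placeholders):

```python
def infer_path_type(values, use_string_for_ambiguous=False):
    """Infer a type for one JSON path from its sampled values."""
    kinds = {"object" if isinstance(v, dict) else "scalar" for v in values}
    if len(kinds) > 1:
        # e.g. 42 in one object vs {"b": "Hello"} in another
        if use_string_for_ambiguous:
            return "Nullable(String)"   # fall back to String for the whole path
        raise ValueError("JSON objects have ambiguous data for this path")
    # Placeholder for the non-ambiguous branches of real inference.
    return "Tuple(...)" if kinds == {"object"} else "Nullable(Int64)"
```

With the fallback enabled, both `42` and `{"b" : "Hello"}` are read as strings, matching the `('42')` / `('{"b" : "Hello"}')` rows in the example below.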
With the setting enabled:

```sql
SET input_format_json_try_infer_named_tuples_from_objects = 1;
SET input_format_json_use_string_type_for_ambiguous_paths_in_named_tuples_inference_from_objects = 1;
DESC format(JSONEachRow, '{"obj" : {"a" : 42}}, {"obj" : {"a" : {"b" : "Hello"}}}');
SELECT * FROM format(JSONEachRow, '{"obj" : {"a" : 42}}, {"obj" : {"a" : {"b" : "Hello"}}}');
```

Result:

```response
┌─name─┬─type──────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ obj  │ Tuple(a Nullable(String)) │              │                    │         │                  │                │
└──────┴───────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
┌─obj─────────────────┐
│ ('42')              │
│ ('{"b" : "Hello"}') │
└─────────────────────┘
```
##### input_format_json_read_objects_as_strings {#input_format_json_read_objects_as_strings}
Enabling this setting allows reading nested JSON objects as strings.
This setting can be used to read nested JSON objects without using the JSON Object type.

This setting is enabled by default.

Note: this setting takes effect only when the setting `input_format_json_try_infer_named_tuples_from_objects` is disabled.

```sql
SET input_format_json_read_objects_as_strings = 1, input_format_json_try_infer_named_tuples_from_objects = 0;
DESC format(JSONEachRow, $$
{"obj" : {"key1" : 42, "key2" : [1,2,3,4]}}
{"obj" : {"key3" : {"nested_key" : 1}}}
$$)
```

```response
┌─name─┬─type─────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ obj  │ Nullable(String) │              │                    │         │                  │                │
└──────┴──────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
##### input_format_json_read_numbers_as_strings {#input_format_json_read_numbers_as_strings}

Enabling this setting allows reading numeric values as strings.

This setting is enabled by default.

**Example:**

```sql
SET input_format_json_read_numbers_as_strings = 1;
DESC format(JSONEachRow, $$
{"value" : 1055}
{"value" : "unknown"}
$$)
```

```response
┌─name──┬─type─────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ value │ Nullable(String) │              │                    │         │                  │                │
└───────┴──────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
##### input_format_json_read_bools_as_numbers {#input_format_json_read_bools_as_numbers}

Enabling this setting allows reading Bool values as numbers.

This setting is enabled by default.

**Example:**

```sql
SET input_format_json_read_bools_as_numbers = 1;
DESC format(JSONEachRow, $$
{"value" : true}
{"value" : 42}
$$)
```

```response
┌─name──┬─type────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ value │ Nullable(Int64) │              │                    │         │                  │                │
└───────┴─────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
##### input_format_json_read_bools_as_strings {#input_format_json_read_bools_as_strings}

Enabling this setting allows reading Bool values as strings.

This setting is enabled by default.

**Example:**
```sql
SET input_format_json_read_bools_as_strings = 1;
DESC format(JSONEachRow, $$
{"value" : true}
{"value" : "Hello, World"}
$$)
```
```response
┌─name──┬─type─────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ value │ Nullable(String) │              │                    │         │                  │                │
└───────┴──────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
#### input_format_json_read_arrays_as_strings {#input_format_json_read_arrays_as_strings}
Enabling this setting allows reading JSON array values as strings.
This setting is enabled by default.
Example:

```sql
SET input_format_json_read_arrays_as_strings = 1;
SELECT arr, toTypeName(arr), JSONExtractArrayRaw(arr)[3] from format(JSONEachRow, 'arr String', '{"arr" : [1, "Hello", [1,2,3]]}');
```

```response
┌─arr───────────────────┬─toTypeName(arr)─┬─arrayElement(JSONExtractArrayRaw(arr), 3)─┐
│ [1, "Hello", [1,2,3]] │ String          │ [1,2,3]                                   │
└───────────────────────┴─────────────────┴───────────────────────────────────────────┘
```
#### input_format_json_infer_incomplete_types_as_strings {#input_format_json_infer_incomplete_types_as_strings}
Enabling this setting allows using the String type for JSON keys that contain only `Null`/`{}`/`[]` in the data sample during schema inference.
In JSON formats any value can be read as a String if all corresponding settings are enabled (they are all enabled by default), so by using the String type for keys with unknown types we can avoid errors like `Cannot determine type for column 'column_name' by first 25000 rows of data, most likely this column contains only Nulls or empty Arrays/Maps` during schema inference.
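A sketch of this fallback (illustrative Python only; `infer_json_key_type` is an assumption for the example and the type set is simplified to Int64/String):

```python
# Keys whose sampled values are only null / {} / [] get String instead of
# failing schema inference, when the fallback is enabled.
def infer_json_key_type(samples, incomplete_as_strings=True):
    complete = [v for v in samples if v not in (None, {}, [])]
    if complete:
        if all(isinstance(v, int) for v in complete):
            return "Nullable(Int64)"
        return "Nullable(String)"
    if incomplete_as_strings:
        return "Nullable(String)"
    raise ValueError("Cannot determine type: only Nulls or empty Arrays/Maps")

print(infer_json_key_type([None, {}, []]))  # Nullable(String)
```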
Example:
```sql
SET input_format_json_infer_incomplete_types_as_strings = 1, input_format_json_try_infer_named_tuples_from_objects = 1;
DESCRIBE format(JSONEachRow, '{"obj" : {"a" : [1,2,3], "b" : "hello", "c" : null, "d" : {}, "e" : []}}');
SELECT * FROM format(JSONEachRow, '{"obj" : {"a" : [1,2,3], "b" : "hello", "c" : null, "d" : {}, "e" : []}}');
```
Result:
```response
┌─name─┬─type────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ obj  │ Tuple(a Array(Nullable(Int64)), b Nullable(String), c Nullable(String), d Nullable(String), e Array(Nullable(String))) │              │                    │         │                  │                │
└──────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
┌─obj────────────────────────────┐
│ ([1,2,3],'hello',NULL,'{}',[]) │
└────────────────────────────────┘
```
### CSV {#csv}
In the CSV format, ClickHouse extracts column values from the row according to the delimiters. ClickHouse expects all types except numbers and strings to be enclosed in double quotes. If the value is in double quotes, ClickHouse tries to parse the data inside the quotes using the recursive parser and then tries to find the most appropriate data type for it. If the value is not in double quotes, ClickHouse tries to parse it as a number, and if the value is not a number, ClickHouse treats it as a string.

If you don't want ClickHouse to try to determine complex types using parsers and heuristics, you can disable the setting `input_format_csv_use_best_effort_in_schema_inference`, and ClickHouse will treat all columns as Strings.

If the setting `input_format_csv_detect_header` is enabled, ClickHouse will try to detect a header with column names (and maybe types) while inferring the schema. This setting is enabled by default.
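The quoted/unquoted decision described above can be sketched roughly as follows. This is illustrative Python, not ClickHouse's actual parser; `infer_csv_value_type` and the `Array(...)` placeholder (standing in for the recursive parser) are assumptions, and booleans, dates, etc. are omitted for brevity:

```python
def infer_csv_value_type(field: str) -> str:
    """Toy version of the CSV rule: quoted values get the recursive
    parser, unquoted values are number-or-String."""
    if field.startswith('"') and field.endswith('"') and len(field) >= 2:
        inner = field[1:-1]
        if inner.startswith('['):
            # the real parser recurses into array/map/tuple literals here
            return "Array(...)"
        for caster, name in ((int, "Int64"), (float, "Float64")):
            try:
                caster(inner)
                return f"Nullable({name})"
            except ValueError:
                pass
        return "Nullable(String)"
    # unquoted: try number first, otherwise treat as a string
    for caster, name in ((int, "Int64"), (float, "Float64")):
        try:
            caster(field)
            return f"Nullable({name})"
        except ValueError:
            pass
    return "Nullable(String)"

print(infer_csv_value_type('42.42'))  # Nullable(Float64)
```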
Examples:
Integers, Floats, Bools, Strings:
```sql
DESC format(CSV, '42,42.42,true,"Hello,World!"')
```

```response
┌─name─┬─type──────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Nullable(Int64)   │              │                    │         │                  │                │
│ c2   │ Nullable(Float64) │              │                    │         │                  │                │
│ c3   │ Nullable(Bool)    │              │                    │         │                  │                │
│ c4   │ Nullable(String)  │              │                    │         │                  │                │
└──────┴───────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Strings without quotes:
```sql
DESC format(CSV, 'Hello world!,World hello!')
```

```response
┌─name─┬─type─────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Nullable(String) │              │                    │         │                  │                │
│ c2   │ Nullable(String) │              │                    │         │                  │                │
└──────┴──────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Dates, DateTimes:
```sql
DESC format(CSV, '"2020-01-01","2020-01-01 00:00:00","2022-01-01 00:00:00.000"')
```
```response
┌─name─┬─type────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Nullable(Date)          │              │                    │         │                  │                │
│ c2   │ Nullable(DateTime)      │              │                    │         │                  │                │
│ c3   │ Nullable(DateTime64(9)) │              │                    │         │                  │                │
└──────┴─────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Arrays:
```sql
DESC format(CSV, '"[1,2,3]","[[1, 2], [], [3, 4]]"')
```

```response
┌─name─┬─type──────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Array(Nullable(Int64))        │              │                    │         │                  │                │
│ c2   │ Array(Array(Nullable(Int64))) │              │                    │         │                  │                │
└──────┴───────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
```sql
DESC format(CSV, $$"['Hello', 'world']","[['Abc', 'Def'], []]"$$)
```

```response
┌─name─┬─type───────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Array(Nullable(String))        │              │                    │         │                  │                │
│ c2   │ Array(Array(Nullable(String))) │              │                    │         │                  │                │
└──────┴────────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
If an array contains null, ClickHouse will use types from the other array elements:
```sql
DESC format(CSV, '"[NULL, 42, NULL]"')
```

```response
┌─name─┬─type───────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Array(Nullable(Int64)) │              │                    │         │                  │                │
└──────┴────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
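The NULL-in-array rule can be sketched as a small helper that ignores NULLs when picking the element type (illustrative Python; `infer_array_element_type` is an assumption and the type set is simplified to Int64/String):

```python
def infer_array_element_type(elements):
    """Infer the array type from the non-NULL elements only."""
    non_null = [e for e in elements if e is not None]
    if not non_null:
        return None  # only NULLs: the type cannot be determined
    if all(isinstance(e, int) for e in non_null):
        return "Array(Nullable(Int64))"
    return "Array(Nullable(String))"

print(infer_array_element_type([None, 42, None]))  # Array(Nullable(Int64))
```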
Maps:
```sql
DESC format(CSV, $$"{'key1' : 42, 'key2' : 24}"$$)
```

```response
┌─name─┬─type──────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Map(String, Nullable(Int64))  │              │                    │         │                  │                │
└──────┴───────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Nested Arrays and Maps:
```sql
DESC format(CSV, $$"[{'key1' : [[42, 42], []], 'key2' : [[null], [42]]}]"$$)
```
```response
┌─name─┬─type──────────────────────────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Array(Map(String, Array(Array(Nullable(Int64))))) │              │                    │         │                  │                │
└──────┴───────────────────────────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
If ClickHouse cannot determine the type inside quotes, because the data contains only nulls, ClickHouse will treat it as String:
```sql
DESC format(CSV, '"[NULL, NULL]"')
```

```response
┌─name─┬─type─────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Nullable(String) │              │                    │         │                  │                │
└──────┴──────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Example with the setting `input_format_csv_use_best_effort_in_schema_inference` disabled:

```sql
SET input_format_csv_use_best_effort_in_schema_inference = 0
DESC format(CSV, '"[1,2,3]",42.42,Hello World!')
```

```response
┌─name─┬─type─────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Nullable(String) │              │                    │         │                  │                │
│ c2   │ Nullable(String) │              │                    │         │                  │                │
│ c3   │ Nullable(String) │              │                    │         │                  │                │
└──────┴──────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Examples of header auto-detection (when `input_format_csv_detect_header` is enabled):

Only names:

```sql
SELECT * FROM format(CSV,
$$"number","string","array"
42,"Hello","[1, 2, 3]"
43,"World","[4, 5, 6]"
$$)
```

```response
┌─number─┬─string─┬─array───┐
│     42 │ Hello  │ [1,2,3] │
│     43 │ World  │ [4,5,6] │
└────────┴────────┴─────────┘
```
Names and types:

```sql
DESC format(CSV,
$$"number","string","array"
"UInt32","String","Array(UInt16)"
42,"Hello","[1, 2, 3]"
43,"World","[4, 5, 6]"
$$)
```

```response
┌─name───┬─type──────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ number │ UInt32        │              │                    │         │                  │                │
│ string │ String        │              │                    │         │                  │                │
│ array  │ Array(UInt16) │              │                    │         │                  │                │
└────────┴───────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Note that the header can be detected only if there is at least one column with a non-String type. If all columns have String type, the header is not detected:
```sql
SELECT * FROM format(CSV,
$$"first_column","second_column"
"Hello","World"
"World","Hello"
$$)
```

```response
┌─c1───────────┬─c2────────────┐
│ first_column │ second_column │
│ Hello        │ World         │
│ World        │ Hello         │
└──────────────┴───────────────┘
```
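A toy version of this detection rule (illustrative Python; the real check works on inferred types, while this sketch only distinguishes numbers from strings):

```python
def is_number(s: str) -> bool:
    try:
        float(s)
        return True
    except ValueError:
        return False

def detect_header(rows):
    """Treat the first row as a header only when at least one data
    column infers to a non-String (here: numeric) type."""
    header, data = rows[0], rows[1:]
    if any(is_number(cell) for cell in header):
        return False  # header cells must look like names, not values
    return any(all(is_number(r[i]) for r in data) for i in range(len(header)))

print(detect_header([["number", "string"], ["42", "Hello"], ["43", "World"]]))  # True
print(detect_header([["first_column", "second_column"], ["Hello", "World"]]))   # False
```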
#### CSV settings {#csv-settings}
##### input_format_csv_try_infer_numbers_from_strings {#input_format_csv_try_infer_numbers_from_strings}
Enabling this setting allows inferring numbers from string values.
This setting is disabled by default.
Example:
```sql
SET input_format_csv_try_infer_numbers_from_strings = 1;
DESC format(CSV, '"42","42.42"');
```

```response
┌─name─┬─type──────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Nullable(Int64)   │              │                    │         │                  │                │
│ c2   │ Nullable(Float64) │              │                    │         │                  │                │
└──────┴───────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
### TSV/TSKV {#tsv-tskv}
In TSV/TSKV formats ClickHouse extracts the column value from the row according to tabular delimiters and then parses the extracted value using the recursive parser to determine the most appropriate type. If the type cannot be determined, ClickHouse treats this value as String.

If you don't want ClickHouse to try to determine complex types using parsers and heuristics, you can disable the setting `input_format_tsv_use_best_effort_in_schema_inference`, and ClickHouse will treat all columns as Strings.

If the setting `input_format_tsv_detect_header` is enabled, ClickHouse will try to detect a header with column names (and maybe types) while inferring the schema. This setting is enabled by default.
Examples:
Integers, Floats, Bools, Strings:
```sql
DESC format(TSV, '42 42.42 true Hello,World!')
```

```response
┌─name─┬─type──────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Nullable(Int64)   │              │                    │         │                  │                │
│ c2   │ Nullable(Float64) │              │                    │         │                  │                │
│ c3   │ Nullable(Bool)    │              │                    │         │                  │                │
│ c4   │ Nullable(String)  │              │                    │         │                  │                │
└──────┴───────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
```sql
DESC format(TSKV, 'int=42 float=42.42 bool=true string=Hello,World!\n')
```
```response
┌─name───┬─type──────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ int    │ Nullable(Int64)   │              │                    │         │                  │                │
│ float  │ Nullable(Float64) │              │                    │         │                  │                │
│ bool   │ Nullable(Bool)    │              │                    │         │                  │                │
│ string │ Nullable(String)  │              │                    │         │                  │                │
└────────┴───────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Dates, DateTimes:
```sql
DESC format(TSV, '2020-01-01 2020-01-01 00:00:00 2022-01-01 00:00:00.000')
```

```response
┌─name─┬─type────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Nullable(Date)          │              │                    │         │                  │                │
│ c2   │ Nullable(DateTime)      │              │                    │         │                  │                │
│ c3   │ Nullable(DateTime64(9)) │              │                    │         │                  │                │
└──────┴─────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Arrays:
```sql
DESC format(TSV, '[1,2,3] [[1, 2], [], [3, 4]]')
```

```response
┌─name─┬─type──────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Array(Nullable(Int64))        │              │                    │         │                  │                │
│ c2   │ Array(Array(Nullable(Int64))) │              │                    │         │                  │                │
└──────┴───────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
```sql
DESC format(TSV, '[''Hello'', ''world''] [[''Abc'', ''Def''], []]')
```

```response
┌─name─┬─type───────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Array(Nullable(String))        │              │                    │         │                  │                │
│ c2   │ Array(Array(Nullable(String))) │              │                    │         │                  │                │
└──────┴────────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
If an array contains null, ClickHouse will use types from the other array elements:
```sql
DESC format(TSV, '[NULL, 42, NULL]')
```
```response
┌─name─┬─type───────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Array(Nullable(Int64)) │              │                    │         │                  │                │
└──────┴────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Tuples:
```sql
DESC format(TSV, $$(42, 'Hello, world!')$$)
```

```response
┌─name─┬─type──────────────────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Tuple(Nullable(Int64), Nullable(String))  │              │                    │         │                  │                │
└──────┴───────────────────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Maps:
```sql
DESC format(TSV, $${'key1' : 42, 'key2' : 24}$$)
```

```response
┌─name─┬─type──────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Map(String, Nullable(Int64))  │              │                    │         │                  │                │
└──────┴───────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Nested Arrays, Tuples and Maps:
```sql
DESC format(TSV, $$[{'key1' : [(42, 'Hello'), (24, NULL)], 'key2' : [(NULL, ','), (42, 'world!')]}]$$)
```

```response
┌─name─┬─type─────────────────────────────────────────────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Array(Map(String, Array(Tuple(Nullable(Int64), Nullable(String))))) │              │                    │         │                  │                │
└──────┴──────────────────────────────────────────────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
If ClickHouse cannot determine the type, because the data contains only nulls, ClickHouse will treat it as String:
```sql
DESC format(TSV, '[NULL, NULL]')
```

```response
┌─name─┬─type─────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Nullable(String) │              │                    │         │                  │                │
└──────┴──────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Example with the setting `input_format_tsv_use_best_effort_in_schema_inference` disabled:

```sql
SET input_format_tsv_use_best_effort_in_schema_inference = 0
DESC format(TSV, '[1,2,3] 42.42 Hello World!')
```
```response
┌─name─┬─type─────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Nullable(String) │              │                    │         │                  │                │
│ c2   │ Nullable(String) │              │                    │         │                  │                │
│ c3   │ Nullable(String) │              │                    │         │                  │                │
└──────┴──────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Examples of header auto-detection (when `input_format_tsv_detect_header` is enabled):

Only names:

```sql
SELECT * FROM format(TSV,
$$number string array
42 Hello [1, 2, 3]
43 World [4, 5, 6]
$$);
```

```response
┌─number─┬─string─┬─array───┐
│     42 │ Hello  │ [1,2,3] │
│     43 │ World  │ [4,5,6] │
└────────┴────────┴─────────┘
```
Names and types:

```sql
DESC format(TSV,
$$number string array
UInt32 String Array(UInt16)
42 Hello [1, 2, 3]
43 World [4, 5, 6]
$$)
```

```response
┌─name───┬─type──────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ number │ UInt32        │              │                    │         │                  │                │
│ string │ String        │              │                    │         │                  │                │
│ array  │ Array(UInt16) │              │                    │         │                  │                │
└────────┴───────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Note that the header can be detected only if there is at least one column with a non-String type. If all columns have String type, the header is not detected:
```sql
SELECT * FROM format(TSV,
$$first_column second_column
Hello World
World Hello
$$)
```

```response
┌─c1───────────┬─c2────────────┐
│ first_column │ second_column │
│ Hello        │ World         │
│ World        │ Hello         │
└──────────────┴───────────────┘
```
### Values {#values}
In the Values format ClickHouse extracts the column value from the row and then parses it using the recursive parser, similarly to how literals are parsed.
Examples:
Integers, Floats, Bools, Strings:
```sql
DESC format(Values, $$(42, 42.42, true, 'Hello,World!')$$)
```
```response
┌─name─┬─type──────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Nullable(Int64)   │              │                    │         │                  │                │
│ c2   │ Nullable(Float64) │              │                    │         │                  │                │
│ c3   │ Nullable(Bool)    │              │                    │         │                  │                │
│ c4   │ Nullable(String)  │              │                    │         │                  │                │
└──────┴───────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Dates, DateTimes:
```sql
DESC format(Values, $$('2020-01-01', '2020-01-01 00:00:00', '2022-01-01 00:00:00.000')$$)
```

```response
┌─name─┬─type────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Nullable(Date)          │              │                    │         │                  │                │
│ c2   │ Nullable(DateTime)      │              │                    │         │                  │                │
│ c3   │ Nullable(DateTime64(9)) │              │                    │         │                  │                │
└──────┴─────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Arrays:
```sql
DESC format(Values, '([1,2,3], [[1, 2], [], [3, 4]])')
```

```response
┌─name─┬─type──────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Array(Nullable(Int64))        │              │                    │         │                  │                │
│ c2   │ Array(Array(Nullable(Int64))) │              │                    │         │                  │                │
└──────┴───────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
If an array contains null, ClickHouse will use types from the other array elements:
```sql
DESC format(Values, '([NULL, 42, NULL])')
```

```response
┌─name─┬─type───────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Array(Nullable(Int64)) │              │                    │         │                  │                │
└──────┴────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Tuples:
```sql
DESC format(Values, $$((42, 'Hello, world!'))$$)
```
```response
┌─name─┬─type──────────────────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Tuple(Nullable(Int64), Nullable(String))  │              │                    │         │                  │                │
└──────┴───────────────────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Maps:
```sql
DESC format(Values, $$({'key1' : 42, 'key2' : 24})$$)
```

```response
┌─name─┬─type──────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Map(String, Nullable(Int64))  │              │                    │         │                  │                │
└──────┴───────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Nested Arrays, Tuples and Maps:
```sql
DESC format(Values, $$([{'key1' : [(42, 'Hello'), (24, NULL)], 'key2' : [(NULL, ','), (42, 'world!')]}])$$)
```

```response
┌─name─┬─type─────────────────────────────────────────────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Array(Map(String, Array(Tuple(Nullable(Int64), Nullable(String))))) │              │                    │         │                  │                │
└──────┴──────────────────────────────────────────────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
If ClickHouse cannot determine the type, because the data contains only nulls, an exception will be thrown:
```sql
DESC format(Values, '([NULL, NULL])')
```

```response
Code: 652. DB::Exception: Received from localhost:9000. DB::Exception:
Cannot determine type for column 'c1' by first 1 rows of data,
most likely this column contains only Nulls or empty Arrays/Maps.
...
```
### CustomSeparated {#custom-separated}
In the CustomSeparated format ClickHouse first extracts all column values from the row according to the specified delimiters and then tries to infer the data type for each value according to the escaping rule.
If the setting `input_format_custom_detect_header` is enabled, ClickHouse will try to detect a header with column names (and maybe types) while inferring the schema. This setting is enabled by default.
Example:

```sql
SET format_custom_row_before_delimiter = '<row_before_delimiter>',
    format_custom_row_after_delimiter = '<row_after_delimiter>\n',
    format_custom_row_between_delimiter = '<row_between_delimiter>\n',
    format_custom_result_before_delimiter = '<result_before_delimiter>\n',
    format_custom_result_after_delimiter = '<result_after_delimiter>\n',
    format_custom_field_delimiter = '<field_delimiter>',
    format_custom_escaping_rule = 'Quoted'
DESC format(CustomSeparated, $$<result_before_delimiter>
<row_before_delimiter>42.42<field_delimiter>'Some string 1'<field_delimiter>[1, NULL, 3]<row_after_delimiter>
<row_between_delimiter>
<row_before_delimiter>NULL<field_delimiter>'Some string 3'<field_delimiter>[1, 2, NULL]<row_after_delimiter>
<result_after_delimiter>
$$)
```

```response
┌─name─┬─type───────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Nullable(Float64)      │              │                    │         │                  │                │
│ c2   │ Nullable(String)       │              │                    │         │                  │                │
│ c3   │ Array(Nullable(Int64)) │              │                    │         │                  │                │
└──────┴────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Example of header auto-detection (when `input_format_custom_detect_header` is enabled):
```sql
SET format_custom_row_before_delimiter = '<row_before_delimiter>',
    format_custom_row_after_delimiter = '<row_after_delimiter>\n',
    format_custom_row_between_delimiter = '<row_between_delimiter>\n',
    format_custom_result_before_delimiter = '<result_before_delimiter>\n',
    format_custom_result_after_delimiter = '<result_after_delimiter>\n',
    format_custom_field_delimiter = '<field_delimiter>',
    format_custom_escaping_rule = 'Quoted'
SELECT * FROM format(CustomSeparated, $$<result_before_delimiter>
<row_before_delimiter>'number'<field_delimiter>'string'<field_delimiter>'array'<row_after_delimiter>
<row_between_delimiter>
<row_before_delimiter>42.42<field_delimiter>'Some string 1'<field_delimiter>[1, NULL, 3]<row_after_delimiter>
<row_between_delimiter>
<row_before_delimiter>NULL<field_delimiter>'Some string 3'<field_delimiter>[1, 2, NULL]<row_after_delimiter>
<result_after_delimiter>
$$)
```

```response
┌─number─┬─string────────┬─array──────┐
│  42.42 │ Some string 1 │ [1,NULL,3] │
│   ᴺᵁᴸᴸ │ Some string 3 │ [1,2,NULL] │
└────────┴───────────────┴────────────┘
```
### Template {#template}
In the Template format ClickHouse first extracts all column values from the row according to the specified template and then tries to infer the data type for each value according to its escaping rule.

Example:

Let's say we have a file `resultset` with the following content:

```bash
<result_before_delimiter>
${data}<result_after_delimiter>
```

And a file `row_format` with the following content:

```text
<row_before_delimiter>${column_1:CSV}<field_delimiter_1>${column_2:Quoted}<field_delimiter_2>${column_3:JSON}<row_after_delimiter>
```

Then we can make the following queries:
```sql
SET format_template_rows_between_delimiter = '<row_between_delimiter>\n',
    format_template_row = 'row_format',
    format_template_resultset = 'resultset_format'
DESC format(Template, $$<result_before_delimiter>
<row_before_delimiter>42.42<field_delimiter_1>'Some string 1'<field_delimiter_2>[1, null, 2]<row_after_delimiter>
<row_between_delimiter>
<row_before_delimiter>\N<field_delimiter_1>'Some string 3'<field_delimiter_2>[1, 2, null]<row_after_delimiter>
<result_after_delimiter>
$$)
```

```response
┌─name─────┬─type───────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ column_1 │ Nullable(Float64)      │              │                    │         │                  │                │
│ column_2 │ Nullable(String)       │              │                    │         │                  │                │
│ column_3 │ Array(Nullable(Int64)) │              │                    │         │                  │                │
└──────────┴────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
### Regexp {#regexp}
Similarly to Template, in the Regexp format ClickHouse first extracts all column values from the row according to the specified regular expression and then tries to infer the data type for each value according to the specified escaping rule.
Example:

```sql
SET format_regexp = '^Line: value_1=(.+?), value_2=(.+?), value_3=(.+?)',
    format_regexp_escaping_rule = 'CSV'
DESC format(Regexp, $$Line: value_1=42, value_2="Some string 1", value_3="[1, NULL, 3]"
Line: value_1=2, value_2="Some string 2", value_3="[4, 5, NULL]"$$)
```

```response
┌─name─┬─type───────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Nullable(Int64)        │              │                    │         │                  │                │
│ c2   │ Nullable(String)       │              │                    │         │                  │                │
│ c3   │ Array(Nullable(Int64)) │              │                    │         │                  │                │
└──────┴────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
### Settings for text formats {#settings-for-text-formats}
#### input_format_max_rows_to_read_for_schema_inference/input_format_max_bytes_to_read_for_schema_inference {#input-format-max-rows-to-read-for-schema-inference}
These settings control the amount of data to be read during schema inference.
The more rows/bytes are read, the more time is spent on schema inference, but the greater the chance to correctly determine the types (especially when the data contains a lot of nulls).

Default values:
- `25000` for `input_format_max_rows_to_read_for_schema_inference`.
- `33554432` (32 MB) for `input_format_max_bytes_to_read_for_schema_inference`.
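The interplay of the two limits can be sketched as a sampler that stops at whichever limit is hit first. This is illustrative Python; the parameter names mirror the settings but the `rows_to_sample` helper is an assumption, not a real API, and string length stands in for byte count:

```python
def rows_to_sample(rows, max_rows=25_000, max_bytes=32 * 1024 * 1024):
    """Yield rows for schema inference until either limit is reached."""
    read_rows = read_bytes = 0
    for row in rows:
        if read_rows >= max_rows or read_bytes >= max_bytes:
            break
        read_rows += 1
        read_bytes += len(row)  # approximation: one char = one byte
        yield row

print(len(list(rows_to_sample(["x" * 100] * 10, max_rows=3))))  # 3
```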
#### column_names_for_schema_inference {#column-names-for-schema-inference}
The list of column names to use in schema inference for formats without explicit column names. Specified names will be used instead of the default `c1,c2,c3,...`. The format: `column1,column2,column3,...`.

Example:
```sql
DESC format(TSV, 'Hello, World! 42 [1, 2, 3]') settings column_names_for_schema_inference = 'str,int,arr'
```

```response
┌─name─┬─type───────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ str  │ Nullable(String)       │              │                    │         │                  │                │
│ int  │ Nullable(Int64)        │              │                    │         │                  │                │
│ arr  │ Array(Nullable(Int64)) │              │                    │         │                  │                │
└──────┴────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
schema_inference_hints {#schema-inference-hints}
The list of column names and types to use in schema inference instead of automatically determined types. The format: 'column_name1 column_type1, column_name2 column_type2, ...'.
This setting can be used to specify the types of columns that could not be determined automatically or for optimizing the schema.
Example
```sql
DESC format(JSONEachRow, '{"id" : 1, "age" : 25, "name" : "Josh", "status" : null, "hobbies" : ["football", "cooking"]}') SETTINGS schema_inference_hints = 'age LowCardinality(UInt8), status Nullable(String)', allow_suspicious_low_cardinality_types=1
```
```response
┌─name────┬─type────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ id      │ Nullable(Int64)         │              │                    │         │                  │                │
│ age     │ LowCardinality(UInt8)   │              │                    │         │                  │                │
│ name    │ Nullable(String)        │              │                    │         │                  │                │
│ status  │ Nullable(String)        │              │                    │         │                  │                │
│ hobbies │ Array(Nullable(String)) │              │                    │         │                  │                │
└─────────┴─────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
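How hints override inference can be sketched in Python (illustrative only; `apply_hints` and `parse_hints` are hypothetical helpers, and the naive comma split assumes hinted types contain no commas):

```python
def parse_hints(hints):
    """Parse 'name1 Type1, name2 Type2, ...' into a dict (naive split)."""
    result = {}
    for item in filter(None, (part.strip() for part in hints.split(","))):
        name, column_type = item.split(None, 1)
        result[name] = column_type
    return result

def apply_hints(inferred_schema, hints):
    """Hinted columns take the hinted type; others keep the inferred one."""
    parsed = parse_hints(hints)
    return {name: parsed.get(name, inferred)
            for name, inferred in inferred_schema.items()}

schema = {"id": "Nullable(Int64)", "age": "Nullable(Int64)",
          "status": "Nullable(Nothing)"}
print(apply_hints(schema, "age LowCardinality(UInt8), status Nullable(String)"))
```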
schema_inference_make_columns_nullable {#schema-inference-make-columns-nullable}
Controls making inferred types `Nullable` in schema inference for formats without information about nullability. Possible values:
* 0 - the inferred type will never be `Nullable`,
* 1 - all inferred types will be `Nullable`,
* 2 or 'auto' - for text formats, the inferred type will be `Nullable` only if the column contains `NULL` in a sample that is parsed during schema inference; for strongly-typed formats (Parquet, ORC, Arrow), nullability information is taken from file metadata,
* 3 - for text formats, use `Nullable`; for strongly-typed formats, use file metadata.
Default: 3.
Examples
```sql
SET schema_inference_make_columns_nullable = 1;
DESC format(JSONEachRow, $$
{"id" : 1, "age" : 25, "name" : "Josh", "status" : null, "hobbies" : ["football", "cooking"]}
{"id" : 2, "age" : 19, "name" : "Alan", "status" : "married", "hobbies" : ["tennis", "art"]}
$$)
```
```response
┌─name────┬─type────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ id      │ Nullable(Int64)         │              │                    │         │                  │                │
│ age     │ Nullable(Int64)         │              │                    │         │                  │                │
│ name    │ Nullable(String)        │              │                    │         │                  │                │
│ status  │ Nullable(String)        │              │                    │         │                  │                │
│ hobbies │ Array(Nullable(String)) │              │                    │         │                  │                │
└─────────┴─────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
```sql
SET schema_inference_make_columns_nullable = 'auto';
DESC format(JSONEachRow, $$
{"id" : 1, "age" : 25, "name" : "Josh", "status" : null, "hobbies" : ["football", "cooking"]}
{"id" : 2, "age" : 19, "name" : "Alan", "status" : "married", "hobbies" : ["tennis", "art"]}
$$)
```
```response
┌─name────┬─type─────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ id      │ Int64            │              │                    │         │                  │                │
│ age     │ Int64            │              │                    │         │                  │                │
│ name    │ String           │              │                    │         │                  │                │
│ status  │ Nullable(String) │              │                    │         │                  │                │
│ hobbies │ Array(String)    │              │                    │         │                  │                │
└─────────┴──────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
```sql
SET schema_inference_make_columns_nullable = 0;
DESC format(JSONEachRow, $$
{"id" : 1, "age" : 25, "name" : "Josh", "status" : null, "hobbies" : ["football", "cooking"]}
{"id" : 2, "age" : 19, "name" : "Alan", "status" : "married", "hobbies" : ["tennis", "art"]}
$$)
```
```response
┌─name────┬─type──────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ id      │ Int64         │              │                    │         │                  │                │
│ age     │ Int64         │              │                    │         │                  │                │
│ name    │ String        │              │                    │         │                  │                │
│ status  │ String        │              │                    │         │                  │                │
│ hobbies │ Array(String) │              │                    │         │                  │                │
└─────────┴───────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
input_format_try_infer_integers {#input-format-try-infer-integers}
:::note
This setting does not apply to the `JSON` data type.
:::
If enabled, ClickHouse will try to infer integers instead of floats in schema inference for text formats.
If all numbers in the column from the sample data are integers, the result type will be `Int64`; if at least one number is a float, the result type will be `Float64`.
If the sample data contains only integers and at least one integer is positive and overflows `Int64`, ClickHouse will infer `UInt64`.
Enabled by default.
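The decision rule above can be condensed into a short Python sketch (illustrative, not ClickHouse's implementation; `infer_number_type` is a hypothetical name):

```python
INT64_MAX = 2**63 - 1

def infer_number_type(values, try_infer_integers=True):
    """All integers -> Int64; any float -> Float64;
    only integers with one overflowing Int64 -> UInt64."""
    saw_float = False
    saw_uint64 = False
    for v in values:
        if isinstance(v, float) or not try_infer_integers:
            saw_float = True  # with the setting disabled, numbers become floats
        elif v > INT64_MAX:
            saw_uint64 = True  # fits UInt64 but overflows Int64
    if saw_float:
        return "Float64"
    return "UInt64" if saw_uint64 else "Int64"
```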
Examples
```sql
SET input_format_try_infer_integers = 0
DESC format(JSONEachRow, $$
{"number" : 1}
{"number" : 2}
$$)
```
```response
┌─name───┬─type──────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ number │ Nullable(Float64) │              │                    │         │                  │                │
└────────┴───────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
```sql
SET input_format_try_infer_integers = 1
DESC format(JSONEachRow, $$
{"number" : 1}
{"number" : 2}
$$)
```
```response
┌─name───┬─type────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ number │ Nullable(Int64) │              │                    │         │                  │                │
└────────┴─────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
```sql
DESC format(JSONEachRow, $$
{"number" : 1}
{"number" : 18446744073709551615}
$$)
```
```response
┌─name───┬─type─────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ number │ Nullable(UInt64) │              │                    │         │                  │                │
└────────┴──────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
```sql
DESC format(JSONEachRow, $$
{"number" : 1}
{"number" : 2.2}
$$)
```
```response
┌─name───┬─type──────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ number │ Nullable(Float64) │              │                    │         │                  │                │
└────────┴───────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
input_format_try_infer_datetimes {#input-format-try-infer-datetimes}
If enabled, ClickHouse will try to infer the type `DateTime` or `DateTime64` from string fields in schema inference for text formats.
If all fields from a column in the sample data were successfully parsed as datetimes, the result type will be `DateTime` or `DateTime64(9)` (if any datetime had a fractional part); if at least one field was not parsed as a datetime, the result type will be `String`.
Enabled by default.
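The all-or-nothing nature of this rule can be sketched in Python (illustrative; the two `strptime` patterns stand in for ClickHouse's datetime parsing, which is actually governed by `date_time_input_format`):

```python
from datetime import datetime

def infer_datetime_type(values):
    """DateTime if every value parses; DateTime64(9) if any has a
    fractional part; String as soon as one value fails to parse."""
    has_fraction = False
    for v in values:
        for fmt in ("%Y-%m-%d %H:%M:%S.%f", "%Y-%m-%d %H:%M:%S"):
            try:
                datetime.strptime(v, fmt)
                has_fraction |= "%f" in fmt
                break
            except ValueError:
                continue
        else:
            return "String"  # one unparsable value demotes the whole column
    return "DateTime64(9)" if has_fraction else "DateTime"
```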
Examples
```sql
SET input_format_try_infer_datetimes = 0;
DESC format(JSONEachRow, $$
{"datetime" : "2021-01-01 00:00:00", "datetime64" : "2021-01-01 00:00:00.000"}
{"datetime" : "2022-01-01 00:00:00", "datetime64" : "2022-01-01 00:00:00.000"}
$$)
```
```response
┌─name───────┬─type─────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ datetime   │ Nullable(String) │              │                    │         │                  │                │
│ datetime64 │ Nullable(String) │              │                    │         │                  │                │
└────────────┴──────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
```sql
SET input_format_try_infer_datetimes = 1;
DESC format(JSONEachRow, $$
{"datetime" : "2021-01-01 00:00:00", "datetime64" : "2021-01-01 00:00:00.000"}
{"datetime" : "2022-01-01 00:00:00", "datetime64" : "2022-01-01 00:00:00.000"}
$$)
```
```response
┌─name───────┬─type────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ datetime   │ Nullable(DateTime)      │              │                    │         │                  │                │
│ datetime64 │ Nullable(DateTime64(9)) │              │                    │         │                  │                │
└────────────┴─────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
```sql
DESC format(JSONEachRow, $$
{"datetime" : "2021-01-01 00:00:00", "datetime64" : "2021-01-01 00:00:00.000"}
{"datetime" : "unknown", "datetime64" : "unknown"}
$$)
```
```response
┌─name───────┬─type─────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ datetime   │ Nullable(String) │              │                    │         │                  │                │
│ datetime64 │ Nullable(String) │              │                    │         │                  │                │
└────────────┴──────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
input_format_try_infer_datetimes_only_datetime64 {#input-format-try-infer-datetimes-only-datetime64}
If enabled, ClickHouse will always infer `DateTime64(9)` when `input_format_try_infer_datetimes` is enabled, even if the datetime values don't contain a fractional part.
Disabled by default.
Examples
```sql
SET input_format_try_infer_datetimes = 1;
SET input_format_try_infer_datetimes_only_datetime64 = 1;
DESC format(JSONEachRow, $$
{"datetime" : "2021-01-01 00:00:00", "datetime64" : "2021-01-01 00:00:00.000"}
{"datetime" : "2022-01-01 00:00:00", "datetime64" : "2022-01-01 00:00:00.000"}
$$)
```
```response
┌─name───────┬─type────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ datetime   │ Nullable(DateTime64(9)) │              │                    │         │                  │                │
│ datetime64 │ Nullable(DateTime64(9)) │              │                    │         │                  │                │
└────────────┴─────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Note: Parsing of datetimes during schema inference respects the setting `date_time_input_format`.
input_format_try_infer_dates {#input-format-try-infer-dates}
If enabled, ClickHouse will try to infer the type `Date` from string fields in schema inference for text formats.
If all fields from a column in the sample data were successfully parsed as dates, the result type will be `Date`; if at least one field was not parsed as a date, the result type will be `String`.
Enabled by default.
Examples
```sql
SET input_format_try_infer_datetimes = 0, input_format_try_infer_dates = 0
DESC format(JSONEachRow, $$
{"date" : "2021-01-01"}
{"date" : "2022-01-01"}
$$)
```
```response
┌─name─┬─type─────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ date │ Nullable(String) │              │                    │         │                  │                │
└──────┴──────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
```sql
SET input_format_try_infer_dates = 1
DESC format(JSONEachRow, $$
{"date" : "2021-01-01"}
{"date" : "2022-01-01"}
$$)
```
```response
┌─name─┬─type───────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ date │ Nullable(Date) │              │                    │         │                  │                │
└──────┴────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
```sql
DESC format(JSONEachRow, $$
{"date" : "2021-01-01"}
{"date" : "unknown"}
$$)
```
```response
┌─name─┬─type─────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ date │ Nullable(String) │              │                    │         │                  │                │
└──────┴──────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
input_format_try_infer_exponent_floats {#input-format-try-infer-exponent-floats}
If enabled, ClickHouse will try to infer floats in exponential form for text formats (except JSON where numbers in exponential form are always inferred).
Disabled by default.
Example
```sql
SET input_format_try_infer_exponent_floats = 1;
DESC format(CSV,
$$1.1E10
2.3e-12
42E00
$$)
```
```response
┌─name─┬─type──────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ c1   │ Nullable(Float64) │              │                    │         │                  │                │
└──────┴───────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Self describing formats {#self-describing-formats}
Self-describing formats contain information about the structure of the data in the data itself: it can be a header with a description, a binary type tree, or some kind of table.
To automatically infer a schema from files in such formats, ClickHouse reads the part of the data containing information about the types and converts it into a schema of the ClickHouse table.
Formats with -WithNamesAndTypes suffix {#formats-with-names-and-types}
ClickHouse supports some text formats with the suffix `-WithNamesAndTypes`. This suffix means that the data contains two additional rows with column names and types before the actual data.
During schema inference for such formats, ClickHouse reads the first two rows and extracts the column names and types.
Example
```sql
DESC format(TSVWithNamesAndTypes,
$$num str arr
UInt8 String Array(UInt8)
42 Hello, World! [1,2,3]
$$)
```
```response
┌─name─┬─type─────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ num  │ UInt8        │              │                    │         │                  │                │
│ str  │ String       │              │                    │         │                  │                │
│ arr  │ Array(UInt8) │              │                    │         │                  │                │
└──────┴──────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
JSON formats with metadata {#json-with-metadata}
Some JSON input formats (`JSON`, `JSONCompact`, `JSONColumnsWithMetadata`) contain metadata with column names and types.
In schema inference for such formats, ClickHouse reads this metadata.
Example
```sql
DESC format(JSON, $$
{
"meta":
[
{
"name": "num",
"type": "UInt8"
},
{
"name": "str",
"type": "String"
},
{
"name": "arr",
"type": "Array(UInt8)"
}
],
"data":
[
{
"num": 42,
"str": "Hello, World",
"arr": [1,2,3]
}
],
"rows": 1,
"statistics":
{
"elapsed": 0.005723915,
"rows_read": 1,
"bytes_read": 1
}
}
$$)
```
```response
┌─name─┬─type─────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ num  │ UInt8        │              │                    │         │                  │                │
│ str  │ String       │              │                    │         │                  │                │
│ arr  │ Array(UInt8) │              │                    │         │                  │                │
└──────┴──────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Avro {#avro}
In Avro format ClickHouse reads its schema from the data and converts it to ClickHouse schema using the following type matches:
| Avro data type                     | ClickHouse data type |
|------------------------------------|----------------------|
| `boolean`                          | `Bool`               |
| `int`                              | `Int32`              |
| `int (date)` \*                    | `Date32`             |
| `long`                             | `Int64`              |
| `float`                            | `Float32`            |
| `double`                           | `Float64`            |
| `bytes`, `string`                  | `String`             |
| `fixed`                            | `FixedString(N)`     |
| `enum`                             | `Enum`               |
| `array(T)`                         | `Array(T)`           |
| `union(null, T)`, `union(T, null)` | `Nullable(T)`        |
| `null`                             | `Nullable(Nothing)`  |
| `string (uuid)` \*                 | `UUID`               |
| `binary (decimal)` \*              | `Decimal(P, S)`      |

\* Avro logical types

Other Avro types are not supported.
Parquet {#parquet}
In Parquet format ClickHouse reads its schema from the data and converts it to ClickHouse schema using the following type matches:
| Parquet data type            | ClickHouse data type |
|------------------------------|----------------------|
| `BOOL`                       | `Bool`               |
| `UINT8`                      | `UInt8`              |
| `INT8`                       | `Int8`               |
| `UINT16`                     | `UInt16`             |
| `INT16`                      | `Int16`              |
| `UINT32`                     | `UInt32`             |
| `INT32`                      | `Int32`              |
| `UINT64`                     | `UInt64`             |
| `INT64`                      | `Int64`              |
| `FLOAT`                      | `Float32`            |
| `DOUBLE`                     | `Float64`            |
| `DATE`                       | `Date32`             |
| `TIME (ms)`                  | `DateTime`           |
| `TIMESTAMP`, `TIME (us, ns)` | `DateTime64`         |
| `STRING`, `BINARY`           | `String`             |
| `DECIMAL`                    | `Decimal`            |
| `LIST`                       | `Array`              |
| `STRUCT`                     | `Tuple`              |
| `MAP`                        | `Map`                |

Other Parquet types are not supported.
Arrow {#arrow}
In Arrow format ClickHouse reads its schema from the data and converts it to ClickHouse schema using the following type matches:
| Arrow data type                 | ClickHouse data type |
|---------------------------------|----------------------|
| `BOOL`                          | `Bool`               |
| `UINT8`                         | `UInt8`              |
| `INT8`                          | `Int8`               |
| `UINT16`                        | `UInt16`             |
| `INT16`                         | `Int16`              |
| `UINT32`                        | `UInt32`             |
| `INT32`                         | `Int32`              |
| `UINT64`                        | `UInt64`             |
| `INT64`                         | `Int64`              |
| `FLOAT`, `HALF_FLOAT`           | `Float32`            |
| `DOUBLE`                        | `Float64`            |
| `DATE32`                        | `Date32`             |
| `DATE64`                        | `DateTime`           |
| `TIMESTAMP`, `TIME32`, `TIME64` | `DateTime64`         |
| `STRING`, `BINARY`              | `String`             |
| `DECIMAL128`, `DECIMAL256`      | `Decimal`            |
| `LIST`                          | `Array`              |
| `STRUCT`                        | `Tuple`              |
| `MAP`                          | `Map`                |

Other Arrow types are not supported.
ORC {#orc}
In ORC format ClickHouse reads its schema from the data and converts it to ClickHouse schema using the following type matches:
| ORC data type                        | ClickHouse data type |
|--------------------------------------|----------------------|
| `Boolean`                            | `Bool`               |
| `Tinyint`                            | `Int8`               |
| `Smallint`                           | `Int16`              |
| `Int`                                | `Int32`              |
| `Bigint`                             | `Int64`              |
| `Float`                              | `Float32`            |
| `Double`                             | `Float64`            |
| `Date`                               | `Date32`             |
| `Timestamp`                          | `DateTime64`         |
| `String`, `Char`, `Varchar`, `BINARY` | `String`            |
| `Decimal`                            | `Decimal`            |
| `List`                               | `Array`              |
| `Struct`                             | `Tuple`              |
| `Map`                                | `Map`                |

Other ORC types are not supported.
Native {#native}
Native format is used inside ClickHouse and contains the schema in the data.
In schema inference, ClickHouse reads the schema from the data without any transformations.
Formats with external schema {#formats-with-external-schema}
Such formats require a schema describing the data in a separate file in a specific schema language.
To automatically infer a schema from files in such formats, ClickHouse reads external schema from a separate file and transforms it to a ClickHouse table schema.
Protobuf {#protobuf}
In schema inference for Protobuf format ClickHouse uses the following type matches:
| Protobuf data type            | ClickHouse data type |
|-------------------------------|----------------------|
| `bool`                        | `UInt8`              |
| `float`                       | `Float32`            |
| `double`                      | `Float64`            |
| `int32`, `sint32`, `sfixed32` | `Int32`              |
| `int64`, `sint64`, `sfixed64` | `Int64`              |
| `uint32`, `fixed32`           | `UInt32`             |
| `uint64`, `fixed64`           | `UInt64`             |
| `string`, `bytes`             | `String`             |
| `enum`                        | `Enum`               |
| `repeated T`                  | `Array(T)`           |
| `message`, `group`            | `Tuple`              |
CapnProto {#capnproto}
In schema inference for CapnProto format ClickHouse uses the following type matches:
| CapnProto data type                | ClickHouse data type |
|------------------------------------|----------------------|
| `Bool`                             | `UInt8`              |
| `Int8`                             | `Int8`               |
| `UInt8`                            | `UInt8`              |
| `Int16`                            | `Int16`              |
| `UInt16`                           | `UInt16`             |
| `Int32`                            | `Int32`              |
| `UInt32`                           | `UInt32`             |
| `Int64`                            | `Int64`              |
| `UInt64`                           | `UInt64`             |
| `Float32`                          | `Float32`            |
| `Float64`                          | `Float64`            |
| `Text`, `Data`                     | `String`             |
| `enum`                             | `Enum`               |
| `List`                             | `Array`              |
| `struct`                           | `Tuple`              |
| `union(T, Void)`, `union(Void, T)` | `Nullable(T)`        |
Strong-typed binary formats {#strong-typed-binary-formats}
In such formats, each serialized value contains information about its type (and possibly about its name), but there is no information about the whole table.
In schema inference for such formats, ClickHouse reads data row by row (up to `input_format_max_rows_to_read_for_schema_inference` rows or `input_format_max_bytes_to_read_for_schema_inference` bytes) and extracts the type (and possibly the name) of each value from the data, then converts these types to ClickHouse types.
MsgPack {#msgpack}
In MsgPack format there is no delimiter between rows. To use schema inference for this format, you should specify the number of columns in the table using the setting `input_format_msgpack_number_of_columns`. ClickHouse uses the following type matches:
| MessagePack data type (`INSERT`)                                   | ClickHouse data type |
|--------------------------------------------------------------------|----------------------|
| `int N`, `uint N`, `negative fixint`, `positive fixint`            | `Int64`              |
| `bool`                                                             | `UInt8`              |
| `fixstr`, `str 8`, `str 16`, `str 32`, `bin 8`, `bin 16`, `bin 32` | `String`             |
| `float 32`                                                         | `Float32`            |
| `float 64`                                                         | `Float64`            |
| `uint 16`                                                          | `Date`               |
| `uint 32`                                                          | `DateTime`           |
| `uint 64`                                                          | `DateTime64`         |
| `fixarray`, `array 16`, `array 32`                                 | `Array`              |
| `fixmap`, `map 16`, `map 32`                                       | `Map`                |

By default, all inferred types are inside `Nullable`, but this can be changed using the setting `schema_inference_make_columns_nullable`.
BSONEachRow {#bsoneachrow}
In BSONEachRow each row of data is presented as a BSON document. In schema inference ClickHouse reads BSON documents one by one and extracts
values, names, and types from the data and then transforms these types to ClickHouse types using the following type matches:
| BSON Type | ClickHouse type |
|-----------|-----------------|
| `\x08` boolean | `Bool` |
| `\x10` int32 | `Int32` |
| `\x12` int64 | `Int64` |
| `\x01` double | `Float64` |
| `\x09` datetime | `DateTime64` |
| `\x05` binary with `\x00` binary subtype, `\x02` string, `\x0E` symbol, `\x0D` JavaScript code | `String` |
| `\x07` ObjectId | `FixedString(12)` |
| `\x05` binary with `\x04` uuid subtype, size = 16 | `UUID` |
| `\x04` array | `Array`/`Tuple` (if nested types are different) |
| `\x03` document | `Named Tuple`/`Map` (with `String` keys) |

By default, all inferred types are inside `Nullable`, but this can be changed using the setting `schema_inference_make_columns_nullable`.
Formats with constant schema {#formats-with-constant-schema}
Data in such formats always has the same schema.
LineAsString {#line-as-string}
In this format, ClickHouse reads the whole line from the data into a single column with `String` data type. The inferred type for this format is always `String` and the column name is `line`.
Example
```sql
DESC format(LineAsString, 'Hello\nworld!')
```
```response
┌─name─┬─type───┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ line │ String │              │                    │         │                  │                │
└──────┴────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
JSONAsString {#json-as-string}
In this format, ClickHouse reads the whole JSON object from the data into a single column with `String` data type. The inferred type for this format is always `String` and the column name is `json`.
Example
```sql
DESC format(JSONAsString, '{"x" : 42, "y" : "Hello, World!"}')
```
```response
┌─name─┬─type───┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ json │ String │              │                    │         │                  │                │
└──────┴────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
JSONAsObject {#json-as-object}
In this format, ClickHouse reads the whole JSON object from the data into a single column with `Object('json')` data type. The inferred type for this format is always `Object('json')` and the column name is `json`.
Note: This format works only if `allow_experimental_object_type` is enabled.
Example
```sql
DESC format(JSONAsObject, '{"x" : 42, "y" : "Hello, World!"}') SETTINGS allow_experimental_object_type=1
```
```response
┌─name─┬─type───────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ json │ Object('json') │              │                    │         │                  │                │
└──────┴────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Schema inference modes {#schema-inference-modes}
Schema inference from a set of data files can work in 2 different modes: `default` and `union`.
The mode is controlled by the setting `schema_inference_mode`.
Default mode {#default-schema-inference-mode}
In default mode, ClickHouse assumes that all files have the same schema and tries to infer the schema by reading files one by one until it succeeds.
Example:
Let's say we have 3 files `data1.jsonl`, `data2.jsonl` and `data3.jsonl` with the following content:
`data1.jsonl`:
```json
{"field1" : 1, "field2" : null}
{"field1" : 2, "field2" : null}
{"field1" : 3, "field2" : null}
```
`data2.jsonl`:
```json
{"field1" : 4, "field2" : "Data4"}
{"field1" : 5, "field2" : "Data5"}
{"field1" : 6, "field2" : "Data5"}
```
`data3.jsonl`:
```json
{"field1" : 7, "field2" : "Data7", "field3" : [1, 2, 3]}
{"field1" : 8, "field2" : "Data8", "field3" : [4, 5, 6]}
{"field1" : 9, "field2" : "Data9", "field3" : [7, 8, 9]}
```
Let's try to use schema inference on these 3 files:
```sql
:) DESCRIBE file('data{1,2,3}.jsonl') SETTINGS schema_inference_mode='default'
```
Result:
```response
┌─name───┬─type─────────────┐
│ field1 │ Nullable(Int64)  │
│ field2 │ Nullable(String) │
└────────┴──────────────────┘
```
As we can see, we don't have `field3` from file `data3.jsonl`.
This happens because ClickHouse first tried to infer the schema from file `data1.jsonl` and failed because `field2` contains only nulls, then tried to infer the schema from `data2.jsonl` and succeeded, so the data from file `data3.jsonl` wasn't read.
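The try-files-until-success behavior can be modeled with a small Python sketch (illustrative; the per-file inference here is a toy that fails on all-null columns, mirroring why `data1.jsonl` was skipped):

```python
def infer_schema(rows):
    """Toy per-file inference: fails if any column contains only nulls."""
    schema = {}
    for row in rows:
        for name, value in row.items():
            if value is not None:
                schema.setdefault(name, type(value).__name__)
    if any(name not in schema for row in rows for name in row):
        raise ValueError("cannot determine type: column contains only nulls")
    return schema

def infer_default_mode(files):
    """Return the schema of the first file that infers successfully."""
    for rows in files:
        try:
            return infer_schema(rows)  # later files are never read
        except ValueError:
            continue
    raise ValueError("schema inference failed for all files")

data1 = [{"field1": 1, "field2": None}]
data2 = [{"field1": 4, "field2": "Data4"}]
data3 = [{"field1": 7, "field2": "Data7", "field3": [1, 2, 3]}]
print(infer_default_mode([data1, data2, data3]))  # data3's field3 is never seen
```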
Union mode {#default-schema-inference-mode-1}
In union mode, ClickHouse assumes that files can have different schemas, so it infers the schemas of all the files and then unions them into a common schema.
Let's say we have 3 files `data1.jsonl`, `data2.jsonl` and `data3.jsonl` with the following content:
`data1.jsonl`:
```json
{"field1" : 1}
{"field1" : 2}
{"field1" : 3}
```
`data2.jsonl`:
```json
{"field2" : "Data4"}
{"field2" : "Data5"}
{"field2" : "Data5"}
```
`data3.jsonl`:
```json
{"field3" : [1, 2, 3]}
{"field3" : [4, 5, 6]}
{"field3" : [7, 8, 9]}
```
Let's try to use schema inference on these 3 files:
```sql
:) DESCRIBE file('data{1,2,3}.jsonl') SETTINGS schema_inference_mode='union'
```
Result:
```response
┌─name───┬─type───────────────────┐
│ field1 │ Nullable(Int64)        │
│ field2 │ Nullable(String)       │
│ field3 │ Array(Nullable(Int64)) │
└────────┴────────────────────────┘
```
As we can see, we have all fields from all files.
Note:
- As some of the files may not contain some columns from the resulting schema, union mode is supported only for formats that support reading a subset of columns (like JSONEachRow, Parquet, TSVWithNames, etc.) and won't work for other formats (like CSV, TSV, JSONCompactEachRow, etc.).
- If ClickHouse cannot infer the schema from one of the files, an exception will be thrown.
- If you have a lot of files, reading the schema from all of them can take a lot of time.
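The merge step of union mode can be sketched in Python (illustrative; real ClickHouse also reconciles compatible types rather than rejecting every mismatch):

```python
def union_schemas(per_file_schemas):
    """Merge per-file schemas into one; shared columns must agree on type."""
    merged = {}
    for schema in per_file_schemas:
        for name, column_type in schema.items():
            if name in merged and merged[name] != column_type:
                raise ValueError(f"incompatible types for column {name!r}")
            merged[name] = column_type
    return merged

schemas = [
    {"field1": "Nullable(Int64)"},
    {"field2": "Nullable(String)"},
    {"field3": "Array(Nullable(Int64))"},
]
print(union_schemas(schemas))  # all three columns appear in the result
```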
Automatic format detection {#automatic-format-detection}
If data format is not specified and cannot be determined by the file extension, ClickHouse will try to detect the file format by its content.
Examples:
Let's say we have `data` with the following content:
```csv
"a","b"
1,"Data1"
2,"Data2"
3,"Data3"
```
We can inspect and query this file without specifying the format or structure:
```sql
:) desc file(data);
```
```response
┌─name─┬─type─────────────┐
│ a    │ Nullable(Int64)  │
│ b    │ Nullable(String) │
└──────┴──────────────────┘
```
```sql
:) select * from file(data);
```
```response
┌─a─┬─b─────┐
│ 1 │ Data1 │
│ 2 │ Data2 │
│ 3 │ Data3 │
└───┴───────┘
```
:::note
ClickHouse can detect only a subset of formats, and this detection takes some time. It's always better to specify the format explicitly.
:::
f59d0106-8337-4ce6-8ff5-880ea51469a6 | description: 'Native clients and interfaces for ClickHouse'
keywords: ['clients', 'interfaces', 'CLI', 'SQL console', 'drivers']
slug: /interfaces/natives-clients-and-interfaces
title: 'Native Clients and Interfaces'
doc_type: 'landing-page'
Native clients & interfaces
ClickHouse provides a number of different native clients and interfaces that allow you to connect to ClickHouse.
For more information see the pages below:
| Section | Summary |
|--------------------------------------------------------------|-------------------------------------------------------------------------------------|
|
Command-Line Client
| Native command-line client supporting command-line options and configuration files. |
|
Drivers & Interfaces
| A number of network interfaces, libraries and visual interfaces. |
|
SQL Console
| A fast and easy way to interact with your data in ClickHouse Cloud. | | {"source_file": "native-clients-interfaces-index.md"} | [
0.020451294258236885,
-0.09207846224308014,
-0.050258226692676544,
-0.06055751070380211,
-0.05138043686747551,
-0.030983220785856247,
0.025139261037111282,
-0.01209076028317213,
-0.0769757479429245,
-0.03157997503876686,
-0.000874003570061177,
-0.007561327423900366,
0.027340685948729515,
0... |
06daa6e2-7980-4297-a85a-1d9bd9bb5475 | slug: /managing-data/deleting-data/overview
title: 'Deleting Data'
description: 'How to delete data in ClickHouse Table Of Contents'
keywords: ['delete', 'truncate', 'drop', 'lightweight delete']
doc_type: 'guide'
In this section of the documentation,
we will explore how to delete data in ClickHouse.
| Page | Description |
|-------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------|
|
Overview
| Provides an overview of the various ways to delete data in ClickHouse. |
|
Lightweight deletes
| Learn how to use the Lightweight Delete to delete data. |
|
Delete mutations
| Learn about Delete Mutations. |
|
Truncate table
| Learn about how to use Truncate, which allows the data in a table or database to be removed, while preserving its existence. |
|
Drop partitions
| Learn about Dropping Partitions in ClickHouse. | | {"source_file": "index.md"} | [
0.01873289979994297,
0.034092653542757034,
0.027982203289866447,
0.06384643167257309,
0.045274797827005386,
-0.05440644547343254,
0.019625376909971237,
-0.09481807053089142,
0.027735810726881027,
0.0487484484910965,
0.0363340824842453,
0.0163352619856596,
0.07067718356847763,
-0.0569162741... |
a047d49f-8f1d-40cf-b76e-a0e9c215639c | slug: /updating-data
title: 'Updating Data'
description: 'Updating Data Table Of Contents'
keywords: ['update', 'updating data']
doc_type: 'landing-page'
In this section of the documentation, you will learn how you can update your data.
| Page | Description |
|-------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
Overview
| Provides an overview of the differences in updating data between ClickHouse and OLTP databases, as well as the various methods available to do so in ClickHouse. |
|
Update mutations
| Learn how to update using Update Mutations. |
|
Lightweight updates
| Learn how to update using Lightweight Updates. |
|
ReplacingMergeTree
| Learn how to update using the ReplacingMergeTree. | | {"source_file": "index.md"} | [
0.022098036482930183,
-0.011119566857814789,
0.07102348655462265,
0.020946407690644264,
-0.011310099624097347,
-0.0014224323676899076,
-0.0308675579726696,
-0.02846604399383068,
0.009389879181981087,
0.04656099155545235,
0.007038254756480455,
-0.043983954936265945,
0.04779735952615738,
-0.... |
7a9ab19b-5a1d-4fa0-bd5b-30cf9e4c339d | slug: /parts
title: 'Table parts'
description: 'What are data parts in ClickHouse'
keywords: ['part']
doc_type: 'reference'
import merges from '@site/static/images/managing-data/core-concepts/merges.png';
import part from '@site/static/images/managing-data/core-concepts/part.png';
import Image from '@theme/IdealImage';
What are table parts in ClickHouse? {#what-are-table-parts-in-clickhouse}
The data from each table in the ClickHouse
MergeTree engine family
is organized on disk as a collection of immutable
data parts
.
To illustrate this, we use
this
table (adapted from the
UK property prices dataset
) tracking the date, town, street, and price for sold properties in the United Kingdom:
sql
CREATE TABLE uk.uk_price_paid_simple
(
date Date,
town LowCardinality(String),
street LowCardinality(String),
price UInt32
)
ENGINE = MergeTree
ORDER BY (town, street);
You can
query this table
in our ClickHouse SQL Playground.
A data part is created whenever a set of rows is inserted into the table. The following diagram sketches this:
When a ClickHouse server processes the example insert with 4 rows (e.g., via an
INSERT INTO statement
) sketched in the diagram above, it performs several steps:
β
Sorting
: The rows are sorted by the table's ^^sorting key^^
(town, street)
, and a
sparse primary index
is generated for the sorted rows.
β‘
Splitting
: The sorted data is split into columns.
β’
Compression
: Each column is
compressed
.
β£
Writing to Disk
: The compressed columns are saved as binary column files within a new directory representing the insert's data part. The sparse primary index is also compressed and stored in the same directory.
Depending on the table's specific engine, additional transformations
may
take place alongside sorting.
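The four steps above can be sketched with tiny in-memory rows. This is purely illustrative: the real on-disk part format and ClickHouse's compression codecs are far more involved, and zlib here just stands in for "each column is compressed independently".

```python
# Sketch of the four insert steps: sort, split into columns, compress, write.
# Illustrative only; zlib stands in for ClickHouse's own compression codecs.
import zlib

rows = [
    ("2021-01-01", "London", "Baker Street", 500000),
    ("2021-01-02", "Leeds", "Main Road", 250000),
    ("2021-01-03", "London", "Abbey Road", 750000),
]

# Step 1 - Sorting: order rows by the sorting key (town, street)
rows.sort(key=lambda r: (r[1], r[2]))

# Step 2 - Splitting: turn sorted rows into per-column value lists
columns = dict(zip(("date", "town", "street", "price"), map(list, zip(*rows))))

# Step 3 - Compression: compress each column independently
compressed = {name: zlib.compress(repr(vals).encode()) for name, vals in columns.items()}

# Step 4 - Writing: each compressed column would become one file in the new
# part directory (here we only report the sizes instead of writing files)
for name, blob in compressed.items():
    print(f"{name}.bin: {len(blob)} bytes")
```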
Data ^^parts^^ are self-contained, including all metadata needed to interpret their contents without requiring a central catalog. Beyond the sparse primary index, ^^parts^^ contain additional metadata, such as secondary
data skipping indexes
,
column statistics
, checksums, min-max indexes (if
partitioning
is used), and
more
.
Part merges {#part-merges}
To manage the number of ^^parts^^ per table, a
background merge
job periodically combines smaller ^^parts^^ into larger ones until they reach a
configurable
compressed size (typically ~150 GB). Merged ^^parts^^ are marked as inactive and deleted after a
configurable
time interval. Over time, this process creates a hierarchical structure of merged ^^parts^^, which is why it's called a ^^MergeTree^^ table: | {"source_file": "parts.md"} | [
0.05135375261306763,
-0.005945042707026005,
0.05388578772544861,
0.0514187328517437,
-0.014186402782797813,
-0.07904085516929626,
0.03804445639252663,
0.011393323540687561,
-0.04572494700551033,
0.00713544012978673,
0.054923851042985916,
-0.021656230092048645,
0.07868386059999466,
-0.02975... |
41cbd26e-26c8-4f26-86fe-73e181802bcc | To minimize the number of initial ^^parts^^ and the overhead of merges, database clients are
encouraged
to either insert tuples in bulk, e.g. 20,000 rows at once, or to use the
asynchronous insert mode
, in which ClickHouse buffers rows from multiple incoming INSERTs into the same table and creates a new part only after the buffer size exceeds a configurable threshold, or a timeout expires.
Monitoring table parts {#monitoring-table-parts}
You can
query
the list of all currently existing active ^^parts^^ of our example table by using the
virtual column
_part
:
```sql
SELECT _part
FROM uk.uk_price_paid_simple
GROUP BY _part
ORDER BY _part ASC;
ββ_partββββββββ
1. β all_0_5_1 β
2. β all_12_17_1 β
3. β all_18_23_1 β
4. β all_6_11_1 β
βββββββββββββββ
```
The query above retrieves the names of directories on disk, with each directory representing an active data part of the table. The components of these directory names have specific meanings, which are documented
here
for those interested in exploring further.
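The directory names follow the pattern `<partition_id>_<min_block>_<max_block>_<level>`; the small parser below decodes them for illustration (it is not a ClickHouse API).

```python
# Sketch: decoding an active part's directory name,
# <partition_id>_<min_block>_<max_block>_<level>.
def parse_part_name(name: str) -> dict:
    partition_id, min_block, max_block, level = name.rsplit("_", 3)
    return {
        "partition_id": partition_id,   # 'all' when the table is unpartitioned
        "min_block": int(min_block),    # smallest insert block number covered
        "max_block": int(max_block),    # largest insert block number covered
        "level": int(level),            # how many merge generations deep
    }

print(parse_part_name("all_0_5_1"))
# -> {'partition_id': 'all', 'min_block': 0, 'max_block': 5, 'level': 1}
```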
Alternatively, ClickHouse tracks info for all ^^parts^^ of all tables in the
system.parts
system table, and the following query
returns
for our example table above the list of all currently active ^^parts^^, their merge level, and the number of rows stored in these ^^parts^^:
```sql
SELECT
    name,
    level,
    rows
FROM system.parts
WHERE (database = 'uk') AND (`table` = 'uk_price_paid_simple') AND active
ORDER BY name ASC;
ββnameβββββββββ¬βlevelββ¬ββββrowsββ
1. β all_0_5_1 β 1 β 6368414 β
2. β all_12_17_1 β 1 β 6442494 β
3. β all_18_23_1 β 1 β 5977762 β
4. β all_6_11_1 β 1 β 6459763 β
βββββββββββββββ΄ββββββββ΄ββββββββββ
```
The merge level is incremented by one with each additional merge on the part. A level of 0 indicates this is a new part that has not been merged yet. | {"source_file": "parts.md"} | [
0.01589370332658291,
-0.11490462720394135,
-0.04297635704278946,
0.06282900273799896,
0.004687800537794828,
-0.10688626766204834,
0.03530893847346306,
-0.02122158743441105,
-0.02135266177356243,
0.02140769548714161,
0.09489661455154419,
-0.014778639189898968,
0.04208451882004738,
-0.087004... |
e2b1516e-622b-4cab-8142-9a05f703abd1 | slug: /managing-data/core-concepts
title: 'Core Concepts'
description: 'Learn Core Concepts of how ClickHouse works'
keywords: ['concepts', 'part', 'partition', 'primary index']
doc_type: 'guide'
In this section of the documentation,
you will learn some of the core concepts of how ClickHouse works.
| Page | Description |
|----------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
Table parts
| Learn what table parts are in ClickHouse. |
|
Table partitions
| Learn what table partitions are and what they are used for. |
|
Table part merges
| Learn what table part merges are and what they are used for. |
|
Table shards and replicas
| Learn what table shards and replicas are and what they are used for. |
|
Primary indexes
| Introduces ClickHouse's sparse primary index and how it helps efficiently skip unnecessary data during query execution. Explains how the index is built and used, with examples and tools for observing its effect. Links to a deep dive for advanced use cases and best practices. |
|
Architectural Overview
| A concise academic overview of all components of the ClickHouse architecture, based on our VLDB 2024 scientific paper. | | {"source_file": "index.md"} | [
0.02756577916443348,
-0.016585497185587883,
-0.023940788581967354,
0.031557075679302216,
-0.006633393000811338,
-0.0106571726500988,
0.005482437554746866,
-0.005924961529672146,
-0.04580229893326759,
0.008970679715275764,
0.030129794031381607,
0.0082489512860775,
0.04629035294055939,
-0.05... |
df6d8957-552d-415d-82c4-9a23531960d9 | slug: /architecture/replication
sidebar_label: 'Replication'
sidebar_position: 10
title: 'Replicating data'
description: 'Page describing an example architecture with five servers configured. Two are used to host copies of the data and the rest are used to coordinate the replication of data'
doc_type: 'guide'
keywords: ['replication', 'high availability', 'cluster setup', 'data redundancy', 'fault tolerance']
import Image from '@theme/IdealImage';
import ReplicationShardingTerminology from '@site/docs/_snippets/_replication-sharding-terminology.md';
import ReplicationArchitecture from '@site/static/images/deployment-guides/replication-sharding-examples/replication.png';
import ConfigFileNote from '@site/docs/_snippets/_config-files.md';
import KeeperConfigFileNote from '@site/docs/_snippets/_keeper-config-files.md';
import ConfigExplanation from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_config_explanation.mdx';
import ListenHost from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_listen_host.mdx';
import ServerParameterTable from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_server_parameter_table.mdx';
import KeeperConfig from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_keeper_config.mdx';
import KeeperConfigExplanation from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_keeper_explanation.mdx';
import VerifyKeeperStatus from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_verify_keeper_using_mntr.mdx';
import DedicatedKeeperServers from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_dedicated_keeper_servers.mdx';
import ExampleFiles from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_working_example.mdx';
import CloudTip from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_cloud_tip.mdx';
In this example, you'll learn how to set up a simple ClickHouse cluster which
replicates the data. There are five servers configured. Two are used to host
copies of the data. The other three servers are used to coordinate the replication
of data.
The architecture of the cluster you will be setting up is shown below:
Prerequisites {#pre-requisites}
You've set up a
local ClickHouse server
before
You are familiar with basic configuration concepts of ClickHouse such as
configuration files
You have docker installed on your machine
Set up directory structure and test environment {#set-up}
In this tutorial, you will use
Docker compose
to
set up the ClickHouse cluster. This setup could be modified to work
for separate local machines, virtual machines or cloud instances as well.
Run the following commands to set up the directory structure for this example:
```bash
mkdir cluster_1S_2R
cd cluster_1S_2R
# Create clickhouse-keeper directories | {"source_file": "01_1_shard_2_replicas.md"} | [
0.005313082132488489,
-0.024941058829426765,
-0.052749041467905045,
-0.018015244975686073,
-0.0062433634884655476,
-0.040206052362918854,
-0.08905455470085144,
-0.007907324470579624,
-0.031111324205994606,
0.06275654584169388,
0.07263603806495667,
0.022522836923599243,
0.1615809053182602,
... |
18874697-31a6-474d-9306-1c056919967f | Run the following commands to set up the directory structure for this example:
```bash
mkdir cluster_1S_2R
cd cluster_1S_2R
# Create clickhouse-keeper directories
for i in {01..03}; do
  mkdir -p fs/volumes/clickhouse-keeper-${i}/etc/clickhouse-keeper
done
# Create clickhouse-server directories
for i in {01..02}; do
mkdir -p fs/volumes/clickhouse-${i}/etc/clickhouse-server
done
```
Add the following
docker-compose.yml
file to the
cluster_1S_2R
directory:
yaml title="docker-compose.yml"
version: '3.8'
services:
clickhouse-01:
image: "clickhouse/clickhouse-server:latest"
user: "101:101"
container_name: clickhouse-01
hostname: clickhouse-01
volumes:
- ${PWD}/fs/volumes/clickhouse-01/etc/clickhouse-server/config.d/config.xml:/etc/clickhouse-server/config.d/config.xml
- ${PWD}/fs/volumes/clickhouse-01/etc/clickhouse-server/users.d/users.xml:/etc/clickhouse-server/users.d/users.xml
ports:
- "127.0.0.1:8123:8123"
- "127.0.0.1:9000:9000"
depends_on:
- clickhouse-keeper-01
- clickhouse-keeper-02
- clickhouse-keeper-03
clickhouse-02:
image: "clickhouse/clickhouse-server:latest"
user: "101:101"
container_name: clickhouse-02
hostname: clickhouse-02
volumes:
- ${PWD}/fs/volumes/clickhouse-02/etc/clickhouse-server/config.d/config.xml:/etc/clickhouse-server/config.d/config.xml
- ${PWD}/fs/volumes/clickhouse-02/etc/clickhouse-server/users.d/users.xml:/etc/clickhouse-server/users.d/users.xml
ports:
- "127.0.0.1:8124:8123"
- "127.0.0.1:9001:9000"
depends_on:
- clickhouse-keeper-01
- clickhouse-keeper-02
- clickhouse-keeper-03
clickhouse-keeper-01:
image: "clickhouse/clickhouse-keeper:latest-alpine"
user: "101:101"
container_name: clickhouse-keeper-01
hostname: clickhouse-keeper-01
volumes:
- ${PWD}/fs/volumes/clickhouse-keeper-01/etc/clickhouse-keeper/keeper_config.xml:/etc/clickhouse-keeper/keeper_config.xml
ports:
- "127.0.0.1:9181:9181"
clickhouse-keeper-02:
image: "clickhouse/clickhouse-keeper:latest-alpine"
user: "101:101"
container_name: clickhouse-keeper-02
hostname: clickhouse-keeper-02
volumes:
- ${PWD}/fs/volumes/clickhouse-keeper-02/etc/clickhouse-keeper/keeper_config.xml:/etc/clickhouse-keeper/keeper_config.xml
ports:
- "127.0.0.1:9182:9181"
clickhouse-keeper-03:
image: "clickhouse/clickhouse-keeper:latest-alpine"
user: "101:101"
container_name: clickhouse-keeper-03
hostname: clickhouse-keeper-03
volumes:
- ${PWD}/fs/volumes/clickhouse-keeper-03/etc/clickhouse-keeper/keeper_config.xml:/etc/clickhouse-keeper/keeper_config.xml
ports:
- "127.0.0.1:9183:9181"
Create the following sub-directories and files: | {"source_file": "01_1_shard_2_replicas.md"} | [
0.045695409178733826,
-0.0806901603937149,
-0.013849202543497086,
-0.023341674357652664,
-0.02077459916472435,
-0.04831603541970253,
-0.0003000720462296158,
0.0016967732226476073,
0.01051289401948452,
0.011005057953298092,
0.021665915846824646,
-0.07023193687200546,
0.030492961406707764,
0... |
35c96677-8017-4bd7-acc2-a90480e95e30 | Create the following sub-directories and files:
bash
for i in {01..02}; do
mkdir -p fs/volumes/clickhouse-${i}/etc/clickhouse-server/config.d
mkdir -p fs/volumes/clickhouse-${i}/etc/clickhouse-server/users.d
touch fs/volumes/clickhouse-${i}/etc/clickhouse-server/config.d/config.xml
touch fs/volumes/clickhouse-${i}/etc/clickhouse-server/users.d/users.xml
done
Configure ClickHouse nodes {#configure-clickhouse-servers}
Server setup {#server-setup}
Now modify each empty configuration file
config.xml
located at
fs/volumes/clickhouse-{}/etc/clickhouse-server/config.d
. The lines which are
highlighted below need to be changed to be specific to each node:
xml
<clickhouse replace="true">
<logger>
<level>debug</level>
<log>/var/log/clickhouse-server/clickhouse-server.log</log>
<errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
<size>1000M</size>
<count>3</count>
</logger>
<!--highlight-next-line-->
<display_name>cluster_1S_2R node 1</display_name>
<listen_host>0.0.0.0</listen_host>
<http_port>8123</http_port>
<tcp_port>9000</tcp_port>
<user_directories>
<users_xml>
<path>users.xml</path>
</users_xml>
<local_directory>
<path>/var/lib/clickhouse/access/</path>
</local_directory>
</user_directories>
<distributed_ddl>
<path>/clickhouse/task_queue/ddl</path>
</distributed_ddl>
<remote_servers>
<cluster_1S_2R>
<shard>
<internal_replication>true</internal_replication>
<replica>
<host>clickhouse-01</host>
<port>9000</port>
</replica>
<replica>
<host>clickhouse-02</host>
<port>9000</port>
</replica>
</shard>
</cluster_1S_2R>
</remote_servers>
<zookeeper>
<node>
<host>clickhouse-keeper-01</host>
<port>9181</port>
</node>
<node>
<host>clickhouse-keeper-02</host>
<port>9181</port>
</node>
<node>
<host>clickhouse-keeper-03</host>
<port>9181</port>
</node>
</zookeeper>
<!--highlight-start-->
<macros>
<shard>01</shard>
<replica>01</replica>
<cluster>cluster_1S_2R</cluster>
</macros>
<!--highlight-end-->
</clickhouse> | {"source_file": "01_1_shard_2_replicas.md"} | [
0.037050750106573105,
-0.06561332941055298,
0.0013339982833713293,
-0.029198141768574715,
-0.02641979418694973,
-0.05981382355093956,
0.061453450471162796,
-0.010192080400884151,
-0.03641747683286667,
0.0034701910335570574,
0.04828159883618355,
-0.013642208650708199,
0.047137174755334854,
... |
d16dd31e-1dc8-4e5e-9173-509fc63a157c | | Directory | File |
|-----------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
fs/volumes/clickhouse-01/etc/clickhouse-server/config.d
|
config.xml
|
|
fs/volumes/clickhouse-02/etc/clickhouse-server/config.d
|
config.xml
|
Each section of the above configuration file is explained in more detail below.
Networking and logging {#networking}
Logging is defined in the
<logger>
block. This example configuration gives
you a debug log that will roll over at 1000M three times:
xml
<logger>
<level>debug</level>
<log>/var/log/clickhouse-server/clickhouse-server.log</log>
<errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
<size>1000M</size>
<count>3</count>
</logger>
For more information on logging configuration, see the comments included in the
default ClickHouse
configuration file
.
Cluster configuration {#cluster-configuration}
Configuration for the cluster is set up in the
<remote_servers>
block.
Here the cluster name
cluster_1S_2R
is defined.
The
<cluster_1S_2R></cluster_1S_2R>
block defines the layout of the cluster,
using the
<shard></shard>
and
<replica></replica>
settings, and acts as a
template for distributed DDL queries, which are queries that execute across the
cluster using the
ON CLUSTER
clause. By default, distributed DDL queries
are allowed, but can also be turned off with setting
allow_distributed_ddl_queries
.
internal_replication
is set to true so that data is written to just one of the replicas.
xml
<remote_servers>
<!-- cluster name (should not contain dots) -->
<cluster_1S_2R>
<!-- <allow_distributed_ddl_queries>false</allow_distributed_ddl_queries> -->
<shard>
<!-- Optional. Whether to write data to just one of the replicas. Default: false (write data to all replicas). -->
<internal_replication>true</internal_replication>
<replica>
<host>clickhouse-01</host>
<port>9000</port>
</replica>
<replica>
<host>clickhouse-02</host>
<port>9000</port>
</replica>
</shard>
</cluster_1S_2R>
</remote_servers>
Keeper configuration {#keeper-config-explanation}
The
<zookeeper>
section tells ClickHouse where ClickHouse Keeper (or ZooKeeper) is running.
As we are using a ClickHouse Keeper cluster, each
<node>
of the cluster needs to be specified,
along with its hostname and port number using the
<host>
and
<port>
tags respectively. | {"source_file": "01_1_shard_2_replicas.md"} | [
-0.048409972339868546,
0.05409681424498558,
-0.09214160591363907,
-0.02916708216071129,
-0.02550552226603031,
-0.020091354846954346,
0.038994669914245605,
0.08374205231666565,
0.015796713531017303,
-0.004618654493242502,
0.11647093296051025,
0.04473418369889259,
-0.010443735867738724,
-0.0... |
0c18e3a8-5dd8-47a7-ac0f-85092f4f7d14 | Set up of ClickHouse Keeper is explained in the next step of the tutorial.
xml
<zookeeper>
<node>
<host>clickhouse-keeper-01</host>
<port>9181</port>
</node>
<node>
<host>clickhouse-keeper-02</host>
<port>9181</port>
</node>
<node>
<host>clickhouse-keeper-03</host>
<port>9181</port>
</node>
</zookeeper>
:::note
Although it is possible to run ClickHouse Keeper on the same server as ClickHouse Server,
in production environments we strongly recommend that ClickHouse Keeper runs on dedicated hosts.
:::
Macros configuration {#macros-config-explanation}
Additionally, the
<macros>
section is used to define parameter substitutions for
replicated tables. These are listed in
system.macros
and allow using substitutions
like
{shard}
and
{replica}
in queries.
xml
<macros>
<shard>01</shard>
<replica>01</replica>
<cluster>cluster_1S_2R</cluster>
</macros>
:::note
These will be defined uniquely depending on the layout of the cluster.
:::
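Conceptually, macro substitution is plain string replacement of `{name}` placeholders. The sketch below illustrates it with a common ZooKeeper path template; the template itself is a convention chosen for the example, not fetched from a server.

```python
# Sketch: how {shard}/{replica} macros expand in a replicated table path.
# The path template is an illustrative convention, not a server value.
macros = {"shard": "01", "replica": "01", "cluster": "cluster_1S_2R"}

def expand(template: str, macros: dict) -> str:
    for key, value in macros.items():
        template = template.replace("{" + key + "}", value)
    return template

path = expand("/clickhouse/tables/{shard}/uk_price_paid_local", macros)
print(path)   # -> /clickhouse/tables/01/uk_price_paid_local
```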
User configuration {#user-config}
Now modify each empty configuration file
users.xml
located at
fs/volumes/clickhouse-{}/etc/clickhouse-server/users.d
with the following:
```xml title="/users.d/users.xml"
<?xml version="1.0"?>
<clickhouse replace="true">
    <profiles>
        <default>
            <max_memory_usage>10000000000</max_memory_usage>
            <use_uncompressed_cache>0</use_uncompressed_cache>
            <load_balancing>in_order</load_balancing>
            <log_queries>1</log_queries>
        </default>
    </profiles>
    <users>
        <default>
            <access_management>1</access_management>
            <profile>default</profile>
            <networks>
                <ip>::/0</ip>
            </networks>
            <quota>default</quota>
            <access_management>1</access_management>
            <named_collection_control>1</named_collection_control>
            <show_named_collections>1</show_named_collections>
            <show_named_collections_secrets>1</show_named_collections_secrets>
        </default>
    </users>
    <quotas>
        <default>
            <interval>
                <duration>3600</duration>
                <queries>0</queries>
                <errors>0</errors>
                <result_rows>0</result_rows>
                <read_rows>0</read_rows>
                <execution_time>0</execution_time>
            </interval>
        </default>
    </quotas>
</clickhouse>
```
| Directory | File |
|-----------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
fs/volumes/clickhouse-01/etc/clickhouse-server/users.d
|
users.xml
|
|
fs/volumes/clickhouse-02/etc/clickhouse-server/users.d
|
users.xml
|
In this example, the default user is configured without a password for simplicity.
In practice, this is discouraged.
:::note
In this example, each
users.xml
file is identical for all nodes in the cluster.
:::
Configure ClickHouse Keeper {#configure-clickhouse-keeper-nodes}
Keeper setup {#configuration-explanation} | {"source_file": "01_1_shard_2_replicas.md"} | [
-0.0011675296118482947,
-0.048801131546497345,
-0.05688951164484024,
-0.006139232777059078,
-0.0233139730989933,
-0.07815349102020264,
0.001547943684272468,
-0.05500998720526695,
-0.03610283508896828,
0.05466328561306,
0.022124459967017174,
-0.05705857276916504,
0.10539157688617706,
-0.034... |
5efee2f1-6f5b-4310-80aa-e52a0f9426f1 | Configure ClickHouse Keeper {#configure-clickhouse-keeper-nodes}
Keeper setup {#configuration-explanation}
| Directory | File |
|------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
fs/volumes/clickhouse-keeper-01/etc/clickhouse-keeper
|
keeper_config.xml
|
|
fs/volumes/clickhouse-keeper-02/etc/clickhouse-keeper
|
keeper_config.xml
|
|
fs/volumes/clickhouse-keeper-03/etc/clickhouse-keeper
|
keeper_config.xml
|
Test the setup {#test-the-setup}
Make sure that docker is running on your machine.
Start the cluster using the
docker-compose up
command from the root of the
cluster_1S_2R
directory:
bash
docker-compose up -d
You should see docker begin to pull the ClickHouse and Keeper images,
and then start the containers:
bash
[+] Running 6/6
β Network cluster_1s_2r_default Created
β Container clickhouse-keeper-03 Started
β Container clickhouse-keeper-02 Started
β Container clickhouse-keeper-01 Started
β Container clickhouse-01 Started
β Container clickhouse-02 Started
To verify that the cluster is running, connect to either
clickhouse-01
or
clickhouse-02
and run the
following query. The command to connect to the first node is shown:
```bash
# Connect to any node
docker exec -it clickhouse-01 clickhouse-client
```
If successful, you will see the ClickHouse client prompt:
response
cluster_1S_2R node 1 :)
Run the following query to check what cluster topologies are defined for which
hosts:
sql title="Query"
SELECT
cluster,
shard_num,
replica_num,
host_name,
port
FROM system.clusters;
response title="Response"
ββclusterββββββββ¬βshard_numββ¬βreplica_numββ¬βhost_nameββββββ¬βportββ
1. β cluster_1S_2R β 1 β 1 β clickhouse-01 β 9000 β
2. β cluster_1S_2R β 1 β 2 β clickhouse-02 β 9000 β
3. β default β 1 β 1 β localhost β 9000 β
βββββββββββββββββ΄ββββββββββββ΄ββββββββββββββ΄ββββββββββββββββ΄βββββββ
Run the following query to check the status of the ClickHouse Keeper cluster:
sql title="Query"
SELECT *
FROM system.zookeeper
WHERE path IN ('/', '/clickhouse')
response title="Response"
ββnameββββββββ¬βvalueββ¬βpathβββββββββ
1. β sessions β β /clickhouse β
2. β task_queue β β /clickhouse β
3. β keeper β β / β
4. β clickhouse β β / β
ββββββββββββββ΄ββββββββ΄ββββββββββββββ | {"source_file": "01_1_shard_2_replicas.md"} | [
0.013806043192744255,
-0.0034335097298026085,
-0.07721167802810669,
-0.025163227692246437,
0.02663521096110344,
-0.07519824802875519,
0.01741454377770424,
-0.033306483179330826,
-0.02359217032790184,
0.018202602863311768,
0.07166115939617157,
-0.01752799190580845,
0.014089713804423809,
-0.... |
83eaf2d1-ed16-4c40-9e23-208eec71291b | With this, you have successfully set up a ClickHouse cluster with a single shard and two replicas.
In the next step, you will create a table in the cluster.
Create a database {#creating-a-database}
Now that you have verified the cluster is correctly set up and running, you
will be recreating the same table as the one used in the
UK property prices
example dataset tutorial. It consists of around 30 million rows of prices paid
for real-estate property in England and Wales since 1995.
Connect to the client of each host by running each of the following commands from separate terminal
tabs or windows:
bash
docker exec -it clickhouse-01 clickhouse-client
docker exec -it clickhouse-02 clickhouse-client
You can run the query below from clickhouse-client of each host to confirm that
there are no databases created yet, apart from the default ones:
sql title="Query"
SHOW DATABASES;
response title="Response"
ββnameββββββββββββββββ
1. β INFORMATION_SCHEMA β
2. β default β
3. β information_schema β
4. β system β
ββββββββββββββββββββββ
From the
clickhouse-01
client run the following
distributed
DDL query using the
ON CLUSTER
clause to create a new database called
uk
:
sql
CREATE DATABASE IF NOT EXISTS uk
-- highlight-next-line
ON CLUSTER cluster_1S_2R;
You can again run the same query as before from the client of each host
to confirm that the database has been created across the cluster despite running
the query only on
clickhouse-01
:
sql
SHOW DATABASES;
```response
ββnameββββββββββββββββ
1. β INFORMATION_SCHEMA β
2. β default β
3. β information_schema β
4. β system β
#highlight-next-line
β uk β
ββββββββββββββββββββββ
```
Create a table on the cluster {#creating-a-table}
Now that the database has been created, create a table on the cluster.
Run the following query from any of the host clients:
sql
CREATE TABLE IF NOT EXISTS uk.uk_price_paid_local
--highlight-next-line
ON CLUSTER cluster_1S_2R
(
price UInt32,
date Date,
postcode1 LowCardinality(String),
postcode2 LowCardinality(String),
type Enum8('terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4, 'other' = 0),
is_new UInt8,
duration Enum8('freehold' = 1, 'leasehold' = 2, 'unknown' = 0),
addr1 String,
addr2 String,
street LowCardinality(String),
locality LowCardinality(String),
town LowCardinality(String),
district LowCardinality(String),
county LowCardinality(String)
)
--highlight-next-line
ENGINE = ReplicatedMergeTree
ORDER BY (postcode1, postcode2, addr1, addr2);
Notice that it is identical to the query used in the original
CREATE
statement of the
UK property prices
example dataset tutorial,
except for the
ON CLUSTER
clause and use of the
ReplicatedMergeTree
engine. | {"source_file": "01_1_shard_2_replicas.md"} | [
0.10623123496770859,
-0.12267297506332397,
-0.031759489327669144,
0.02422121725976467,
-0.06947802752256393,
-0.05841623619198799,
-0.02950541488826275,
-0.07841972261667252,
-0.066134974360466,
0.02170439437031746,
-0.01440897025167942,
-0.11910367757081985,
0.10976772010326385,
-0.018378... |