```json
  "rows": 17,

  "statistics":
  {
    "elapsed": 0.173464376,
    "rows_read": 0,
    "bytes_read": 0
  }
}
```

## Format settings {#format-settings}
---
description: 'Documentation for the JSONCompactStringsEachRowWithNamesAndTypes format'
keywords: ['JSONCompactStringsEachRowWithNamesAndTypes']
slug: /interfaces/formats/JSONCompactStringsEachRowWithNamesAndTypes
title: 'JSONCompactStringsEachRowWithNamesAndTypes'
doc_type: 'reference'
---
| Input | Output | Alias |
|-------|--------|-------|
| ✔ | ✔ | |
## Description {#description}

Differs from the `JSONCompactEachRow` format in that it also prints two header rows with column names and types, similar to `TabSeparatedWithNamesAndTypes`.
## Example usage {#example-usage}

### Inserting data {#inserting-data}

Using a JSON file named `football.json` with the following data:
```json
["date", "season", "home_team", "away_team", "home_team_goals", "away_team_goals"]
["Date", "Int16", "LowCardinality(String)", "LowCardinality(String)", "Int8", "Int8"]
["2022-04-30", "2021", "Sutton United", "Bradford City", "1", "4"]
["2022-04-30", "2021", "Swindon Town", "Barrow", "2", "1"]
["2022-04-30", "2021", "Tranmere Rovers", "Oldham Athletic", "2", "0"]
["2022-05-02", "2021", "Port Vale", "Newport County", "1", "2"]
["2022-05-02", "2021", "Salford City", "Mansfield Town", "2", "2"]
["2022-05-07", "2021", "Barrow", "Northampton Town", "1", "3"]
["2022-05-07", "2021", "Bradford City", "Carlisle United", "2", "0"]
["2022-05-07", "2021", "Bristol Rovers", "Scunthorpe United", "7", "0"]
["2022-05-07", "2021", "Exeter City", "Port Vale", "0", "1"]
["2022-05-07", "2021", "Harrogate Town A.F.C.", "Sutton United", "0", "2"]
["2022-05-07", "2021", "Hartlepool United", "Colchester United", "0", "2"]
["2022-05-07", "2021", "Leyton Orient", "Tranmere Rovers", "0", "1"]
["2022-05-07", "2021", "Mansfield Town", "Forest Green Rovers", "2", "2"]
["2022-05-07", "2021", "Newport County", "Rochdale", "0", "2"]
["2022-05-07", "2021", "Oldham Athletic", "Crawley Town", "3", "3"]
["2022-05-07", "2021", "Stevenage Borough", "Salford City", "4", "2"]
["2022-05-07", "2021", "Walsall", "Swindon Town", "0", "3"]
```

Insert the data:
```sql
INSERT INTO football FROM INFILE 'football.json' FORMAT JSONCompactStringsEachRowWithNamesAndTypes;
```
### Reading data {#reading-data}

Read data using the `JSONCompactStringsEachRowWithNamesAndTypes` format:

```sql
SELECT *
FROM football
FORMAT JSONCompactStringsEachRowWithNamesAndTypes
```

The output will be in JSON format:
```json
["date", "season", "home_team", "away_team", "home_team_goals", "away_team_goals"]
["Date", "Int16", "LowCardinality(String)", "LowCardinality(String)", "Int8", "Int8"]
["2022-04-30", "2021", "Sutton United", "Bradford City", "1", "4"]
["2022-04-30", "2021", "Swindon Town", "Barrow", "2", "1"]
["2022-04-30", "2021", "Tranmere Rovers", "Oldham Athletic", "2", "0"]
["2022-05-02", "2021", "Port Vale", "Newport County", "1", "2"]
["2022-05-02", "2021", "Salford City", "Mansfield Town", "2", "2"]
["2022-05-07", "2021", "Barrow", "Northampton Town", "1", "3"]
["2022-05-07", "2021", "Bradford City", "Carlisle United", "2", "0"]
["2022-05-07", "2021", "Bristol Rovers", "Scunthorpe United", "7", "0"]
["2022-05-07", "2021", "Exeter City", "Port Vale", "0", "1"]
["2022-05-07", "2021", "Harrogate Town A.F.C.", "Sutton United", "0", "2"]
["2022-05-07", "2021", "Hartlepool United", "Colchester United", "0", "2"]
["2022-05-07", "2021", "Leyton Orient", "Tranmere Rovers", "0", "1"]
["2022-05-07", "2021", "Mansfield Town", "Forest Green Rovers", "2", "2"]
["2022-05-07", "2021", "Newport County", "Rochdale", "0", "2"]
["2022-05-07", "2021", "Oldham Athletic", "Crawley Town", "3", "3"]
["2022-05-07", "2021", "Stevenage Borough", "Salford City", "4", "2"]
["2022-05-07", "2021", "Walsall", "Swindon Town", "0", "3"]
```
## Format settings {#format-settings}
:::note
If the setting `input_format_with_names_use_header` is set to `1`, the columns from the input data will be mapped to the columns of the table by name; columns with unknown names will be skipped if the setting `input_format_skip_unknown_fields` is set to `1`. Otherwise, the first row will be skipped.
:::
:::note
If the setting `input_format_with_types_use_header` is set to `1`, the types from the input data will be compared with the types of the corresponding columns of the table. Otherwise, the second row will be skipped.
:::
---
alias: []
description: 'Documentation for the JSONCompactColumns format'
input_format: true
keywords: ['JSONCompactColumns']
output_format: true
slug: /interfaces/formats/JSONCompactColumns
title: 'JSONCompactColumns'
doc_type: 'reference'
---
| Input | Output | Alias |
|-------|--------|-------|
| ✔ | ✔ | |
## Description {#description}

In this format, all data is represented as a single JSON array.

:::note
The `JSONCompactColumns` output format buffers all data in memory in order to output it as a single block, which can lead to high memory consumption.
:::
## Example usage {#example-usage}

### Inserting data {#inserting-data}

Using a JSON file named `football.json` with the following data:
```json
[
["2022-04-30", "2022-04-30", "2022-04-30", "2022-05-02", "2022-05-02", "2022-05-07", "2022-05-07", "2022-05-07", "2022-05-07", "2022-05-07", "2022-05-07", "2022-05-07", "2022-05-07", "2022-05-07", "2022-05-07", "2022-05-07", "2022-05-07"],
[2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021],
["Sutton United", "Swindon Town", "Tranmere Rovers", "Port Vale", "Salford City", "Barrow", "Bradford City", "Bristol Rovers", "Exeter City", "Harrogate Town A.F.C.", "Hartlepool United", "Leyton Orient", "Mansfield Town", "Newport County", "Oldham Athletic", "Stevenage Borough", "Walsall"],
["Bradford City", "Barrow", "Oldham Athletic", "Newport County", "Mansfield Town", "Northampton Town", "Carlisle United", "Scunthorpe United", "Port Vale", "Sutton United", "Colchester United", "Tranmere Rovers", "Forest Green Rovers", "Rochdale", "Crawley Town", "Salford City", "Swindon Town"],
[1, 2, 2, 1, 2, 1, 2, 7, 0, 0, 0, 0, 2, 0, 3, 4, 0],
[4, 1, 0, 2, 2, 3, 0, 0, 1, 2, 2, 1, 2, 2, 3, 2, 3]
]
```

Insert the data:
```sql
INSERT INTO football FROM INFILE 'football.json' FORMAT JSONCompactColumns;
```
### Reading data {#reading-data}

Read data using the `JSONCompactColumns` format:

```sql
SELECT *
FROM football
FORMAT JSONCompactColumns
```

The output will be in JSON format:
```json
[
["2022-04-30", "2022-04-30", "2022-04-30", "2022-05-02", "2022-05-02", "2022-05-07", "2022-05-07", "2022-05-07", "2022-05-07", "2022-05-07", "2022-05-07", "2022-05-07", "2022-05-07", "2022-05-07", "2022-05-07", "2022-05-07", "2022-05-07"],
[2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021],
["Sutton United", "Swindon Town", "Tranmere Rovers", "Port Vale", "Salford City", "Barrow", "Bradford City", "Bristol Rovers", "Exeter City", "Harrogate Town A.F.C.", "Hartlepool United", "Leyton Orient", "Mansfield Town", "Newport County", "Oldham Athletic", "Stevenage Borough", "Walsall"],
["Bradford City", "Barrow", "Oldham Athletic", "Newport County", "Mansfield Town", "Northampton Town", "Carlisle United", "Scunthorpe United", "Port Vale", "Sutton United", "Colchester United", "Tranmere Rovers", "Forest Green Rovers", "Rochdale", "Crawley Town", "Salford City", "Swindon Town"],
[1, 2, 2, 1, 2, 1, 2, 7, 0, 0, 0, 0, 2, 0, 3, 4, 0],
[4, 1, 0, 2, 2, 3, 0, 0, 1, 2, 2, 1, 2, 2, 3, 2, 3]
]
```
Columns that are not present in the block will be filled with default values (you can use the `input_format_defaults_for_omitted_fields` setting here).
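As a sketch of this behavior, the block below inserts only the first four columns of the example table; with `input_format_defaults_for_omitted_fields` enabled, the goal columns receive their default values (the inserted values are illustrative):

```sql
SET input_format_defaults_for_omitted_fields = 1;

INSERT INTO football FORMAT JSONCompactColumns
[
    ["2022-05-14"],
    [2021],
    ["Sutton United"],
    ["Bradford City"]
]
```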
## Format settings {#format-settings}
---
alias: []
description: 'Documentation for the Avro format'
input_format: true
keywords: ['Avro']
output_format: true
slug: /interfaces/formats/Avro
title: 'Avro'
doc_type: 'reference'
---

import DataTypeMapping from './_snippets/data-types-matching.md'
| Input | Output | Alias |
|-------|--------|-------|
| ✔ | ✔ | |
## Description {#description}

Apache Avro is a row-oriented serialization format that uses binary encoding for efficient data processing. The `Avro` format supports reading and writing Avro data files. This format expects self-describing messages with an embedded schema. If you're using Avro with a schema registry, refer to the `AvroConfluent` format.
## Data type mapping {#data-type-mapping}

## Format settings {#format-settings}
| Setting | Description | Default |
|---------|-------------|---------|
| `input_format_avro_allow_missing_fields` | Whether to use a default value instead of throwing an error when a field is not found in the schema. | `0` |
| `input_format_avro_null_as_default` | Whether to use a default value instead of throwing an error when inserting a `null` value into a non-nullable column. | `0` |
| `output_format_avro_codec` | Compression algorithm for Avro output files. Possible values: `null`, `deflate`, `snappy`, `zstd`. | |
| `output_format_avro_sync_interval` | Sync marker frequency in Avro files (in bytes). | `16384` |
| `output_format_avro_string_column_pattern` | Regular expression to identify `String` columns for Avro string type mapping. By default, ClickHouse `String` columns are written as the Avro `bytes` type. | |
| `output_format_avro_rows_in_file` | Maximum number of rows per Avro output file. When this limit is reached, a new file is created (if the storage system supports file splitting). | `1` |
## Examples {#examples}

### Reading Avro data {#reading-avro-data}

To read data from an Avro file into a ClickHouse table:

```bash
$ cat file.avro | clickhouse-client --query="INSERT INTO {some_table} FORMAT Avro"
```
The root schema of the ingested Avro file must be of type `record`.

To find the correspondence between table columns and fields of the Avro schema, ClickHouse compares their names. This comparison is case-sensitive, and unused fields are skipped.

Data types of ClickHouse table columns can differ from the corresponding fields of the Avro data inserted. When inserting data, ClickHouse interprets data types according to the table above and then casts the data to the corresponding column type.
While importing data, when a field is not found in the schema and the setting `input_format_avro_allow_missing_fields` is enabled, the default value will be used instead of throwing an error.
### Writing Avro data {#writing-avro-data}

To write data from a ClickHouse table into an Avro file:

```bash
$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT Avro" > file.avro
```
Column names must:

- start with `[A-Za-z_]`
- be followed by only `[A-Za-z0-9_]`

The output compression and sync interval for Avro files can be configured using the `output_format_avro_codec` and `output_format_avro_sync_interval` settings, respectively.
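As a sketch of applying both settings (the table name, file name, and chosen values are illustrative):

```sql
SET output_format_avro_codec = 'zstd';
SET output_format_avro_sync_interval = 32768;

SELECT *
FROM football
INTO OUTFILE 'football.avro'
FORMAT Avro;
```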
### Inferring the Avro schema {#inferring-the-avro-schema}

Using the ClickHouse `DESCRIBE` statement, you can quickly view the inferred schema of an Avro file, as in the following example. This example uses the URL of a publicly accessible Avro file in the ClickHouse S3 public bucket:

```sql
DESCRIBE url('https://clickhouse-public-datasets.s3.eu-central-1.amazonaws.com/hits.avro', 'Avro');
```
6b8730a3-19b7-4484-a07d-0cfd52c57229 | ```sql
DESCRIBE url('https://clickhouse-public-datasets.s3.eu-central-1.amazonaws.com/hits.avro','Avro);
ββnameββββββββββββββββββββββββ¬βtypeβββββββββββββ¬βdefault_typeββ¬βdefault_expressionββ¬βcommentββ¬βcodec_expressionββ¬βttl_expressionββ
β WatchID β Int64 β β β β β β
β JavaEnable β Int32 β β β β β β
β Title β String β β β β β β
β GoodEvent β Int32 β β β β β β
β EventTime β Int32 β β β β β β
β EventDate β Date32 β β β β β β
β CounterID β Int32 β β β β β β
β ClientIP β Int32 β β β β β β
β ClientIP6 β FixedString(16) β β β β β β
β RegionID β Int32 β β β β β β
...
β IslandID β FixedString(16) β β β β β β
β RequestNum β Int32 β β β β β β
β RequestTry β Int32 β β β β β β
ββββββββββββββββββββββββββββββ΄ββββββββββββββββββ΄βββββββββββββββ΄βββββββββββββββββββββ΄ββββββββββ΄βββββββββββββββββββ΄βββββββββββββββββ
``` | {"source_file": "Avro.md"} | [
---
alias: []
description: 'Documentation for the AvroConfluent format'
input_format: true
keywords: ['AvroConfluent']
output_format: false
slug: /interfaces/formats/AvroConfluent
title: 'AvroConfluent'
doc_type: 'reference'
---

import DataTypesMatching from './_snippets/data-types-matching.md'
| Input | Output | Alias |
|-------|--------|-------|
| ✔ | ✗ | |
## Description {#description}

Apache Avro is a row-oriented serialization format that uses binary encoding for efficient data processing. The `AvroConfluent` format supports decoding single-object, Avro-encoded Kafka messages serialized using the Confluent Schema Registry (or API-compatible services).

Each Avro message embeds a schema ID that ClickHouse automatically resolves by querying the configured schema registry. Once resolved, schemas are cached for optimal performance.
## Data type mapping {#data-type-mapping}

## Format settings {#format-settings}
| Setting | Description | Default |
|---------|-------------|---------|
| `input_format_avro_allow_missing_fields` | Whether to use a default value instead of throwing an error when a field is not found in the schema. | `0` |
| `input_format_avro_null_as_default` | Whether to use a default value instead of throwing an error when inserting a `null` value into a non-nullable column. | `0` |
| `format_avro_schema_registry_url` | The Confluent Schema Registry URL. For basic authentication, URL-encoded credentials can be included directly in the URL path. | |
## Examples {#examples}

### Using a schema registry {#using-a-schema-registry}

To read an Avro-encoded Kafka topic using the Kafka table engine, use the `format_avro_schema_registry_url` setting to provide the URL of the schema registry.
```sql
CREATE TABLE topic1_stream
(
field1 String,
field2 String
)
ENGINE = Kafka()
SETTINGS
kafka_broker_list = 'kafka-broker',
kafka_topic_list = 'topic1',
kafka_group_name = 'group1',
kafka_format = 'AvroConfluent',
format_avro_schema_registry_url = 'http://schema-registry-url';
SELECT * FROM topic1_stream;
```
### Using basic authentication {#using-basic-authentication}

If your schema registry requires basic authentication (e.g., if you're using Confluent Cloud), you can provide URL-encoded credentials in the `format_avro_schema_registry_url` setting.
```sql
CREATE TABLE topic1_stream
(
    field1 String,
    field2 String
)
ENGINE = Kafka()
SETTINGS
    kafka_broker_list = 'kafka-broker',
    kafka_topic_list = 'topic1',
    kafka_group_name = 'group1',
    kafka_format = 'AvroConfluent',
    format_avro_schema_registry_url = 'https://<username>:<password>@schema-registry-url';
```

## Troubleshooting {#troubleshooting}
To monitor ingestion progress and debug errors with the Kafka consumer, you can query the `system.kafka_consumers` system table. If your deployment has multiple replicas (e.g., ClickHouse Cloud), you must use the `clusterAllReplicas` table function.

```sql
SELECT * FROM clusterAllReplicas('default', system.kafka_consumers)
ORDER BY assignments.partition_id ASC;
```
If you run into schema resolution issues, you can use `kafkacat` with `clickhouse-local` to troubleshoot:

```bash
$ kafkacat -b kafka-broker -C -t topic1 -o beginning -f '%s' -c 3 | clickhouse-local --input-format AvroConfluent --format_avro_schema_registry_url 'http://schema-registry' -S "field1 Int64, field2 String" -q 'select * from table'
1 a
2 b
3 c
```
---
alias: []
description: 'Documentation for the CSVWithNamesAndTypes format'
input_format: true
keywords: ['CSVWithNamesAndTypes']
output_format: true
slug: /interfaces/formats/CSVWithNamesAndTypes
title: 'CSVWithNamesAndTypes'
doc_type: 'reference'
---
| Input | Output | Alias |
|-------|--------|-------|
| ✔ | ✔ | |
## Description {#description}

Also prints two header rows with column names and types, similar to `TabSeparatedWithNamesAndTypes`.
## Example usage {#example-usage}

### Inserting data {#inserting-data}

:::tip
Starting from version 23.1, ClickHouse automatically detects headers in CSV files when using the `CSV` format, so it is not necessary to use `CSVWithNames` or `CSVWithNamesAndTypes`.
:::

Using the following CSV file, named `football_types.csv`:
```csv
date,season,home_team,away_team,home_team_goals,away_team_goals
Date,Int16,LowCardinality(String),LowCardinality(String),Int8,Int8
2022-04-30,2021,Sutton United,Bradford City,1,4
2022-04-30,2021,Swindon Town,Barrow,2,1
2022-04-30,2021,Tranmere Rovers,Oldham Athletic,2,0
2022-05-02,2021,Salford City,Mansfield Town,2,2
2022-05-02,2021,Port Vale,Newport County,1,2
2022-05-07,2021,Barrow,Northampton Town,1,3
2022-05-07,2021,Bradford City,Carlisle United,2,0
2022-05-07,2021,Bristol Rovers,Scunthorpe United,7,0
2022-05-07,2021,Exeter City,Port Vale,0,1
2022-05-07,2021,Harrogate Town A.F.C.,Sutton United,0,2
2022-05-07,2021,Hartlepool United,Colchester United,0,2
2022-05-07,2021,Leyton Orient,Tranmere Rovers,0,1
2022-05-07,2021,Mansfield Town,Forest Green Rovers,2,2
2022-05-07,2021,Newport County,Rochdale,0,2
2022-05-07,2021,Oldham Athletic,Crawley Town,3,3
2022-05-07,2021,Stevenage Borough,Salford City,4,2
2022-05-07,2021,Walsall,Swindon Town,0,3
```
Create a table:
```sql
CREATE TABLE football
(
    `date` Date,
    `season` Int16,
    `home_team` LowCardinality(String),
    `away_team` LowCardinality(String),
    `home_team_goals` Int8,
    `away_team_goals` Int8
)
ENGINE = MergeTree
ORDER BY (date, home_team);
```
Insert data using the `CSVWithNamesAndTypes` format:

```sql
INSERT INTO football FROM INFILE 'football_types.csv' FORMAT CSVWithNamesAndTypes;
```
### Reading data {#reading-data}

Read data using the `CSVWithNamesAndTypes` format:

```sql
SELECT *
FROM football
FORMAT CSVWithNamesAndTypes
```

The output will be a CSV with two header rows for column names and types:
```csv
"date","season","home_team","away_team","home_team_goals","away_team_goals"
"Date","Int16","LowCardinality(String)","LowCardinality(String)","Int8","Int8"
"2022-04-30",2021,"Sutton United","Bradford City",1,4
"2022-04-30",2021,"Swindon Town","Barrow",2,1
"2022-04-30",2021,"Tranmere Rovers","Oldham Athletic",2,0
"2022-05-02",2021,"Port Vale","Newport County",1,2
"2022-05-02",2021,"Salford City","Mansfield Town",2,2
"2022-05-07",2021,"Barrow","Northampton Town",1,3
"2022-05-07",2021,"Bradford City","Carlisle United",2,0
"2022-05-07",2021,"Bristol Rovers","Scunthorpe United",7,0
"2022-05-07",2021,"Exeter City","Port Vale",0,1
"2022-05-07",2021,"Harrogate Town A.F.C.","Sutton United",0,2
"2022-05-07",2021,"Hartlepool United","Colchester United",0,2
"2022-05-07",2021,"Leyton Orient","Tranmere Rovers",0,1
"2022-05-07",2021,"Mansfield Town","Forest Green Rovers",2,2
"2022-05-07",2021,"Newport County","Rochdale",0,2
"2022-05-07",2021,"Oldham Athletic","Crawley Town",3,3
"2022-05-07",2021,"Stevenage Borough","Salford City",4,2
"2022-05-07",2021,"Walsall","Swindon Town",0,3
```
## Format settings {#format-settings}
:::note
If the setting `input_format_with_names_use_header` is set to `1`, the columns from the input data will be mapped to the columns of the table by name; columns with unknown names will be skipped if the setting `input_format_skip_unknown_fields` is set to `1`. Otherwise, the first row will be skipped.
:::
:::note
If the setting `input_format_with_types_use_header` is set to `1`, the types from the input data will be compared with the types of the corresponding columns of the table. Otherwise, the second row will be skipped.
:::
---
alias: []
description: 'Documentation for the CSV format'
input_format: true
keywords: ['CSV']
output_format: true
slug: /interfaces/formats/CSV
title: 'CSV'
doc_type: 'reference'
---
## Description {#description}

Comma Separated Values format (RFC 4180).

When formatting, rows are enclosed in double quotes. A double quote inside a string is output as two double quotes in a row. There are no other rules for escaping characters. Date and date-time values are enclosed in double quotes. Numbers are output without quotes. Values are separated by a delimiter character, which is `,` by default. The delimiter character is defined in the setting `format_csv_delimiter`. Rows are separated using the Unix line feed (LF).

Arrays are serialized in CSV as follows:

- first, the array is serialized to a string as in the TabSeparated format
- the resulting string is then output to CSV in double quotes

Tuples in CSV format are serialized as separate columns (that is, their nesting in the tuple is lost).
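To make the array and tuple rules above concrete, a minimal sketch (the values are illustrative):

```sql
SELECT [1, 2, 3] AS arr, ('a', 1) AS tup
FORMAT CSV
```

Here `arr` is output as the quoted string `"[1,2,3]"`, while the tuple's elements appear as two separate CSV columns.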
```bash
$ clickhouse-client --format_csv_delimiter="|" --query="INSERT INTO test.csv FORMAT CSV" < data.csv
```

:::note
By default, the delimiter is `,`. See the `format_csv_delimiter` setting for more information.
:::
When parsing, all values can be parsed either with or without quotes. Both double and single quotes are supported. Rows can also be arranged without quotes, in which case they are parsed up to the delimiter character or line feed (CR or LF). However, in violation of the RFC, when parsing rows without quotes, leading and trailing spaces and tabs are ignored. Supported line feeds: Unix (LF), Windows (CR LF), and Mac OS Classic (CR LF).

`NULL` is formatted according to the setting `format_csv_null_representation` (the default value is `\N`).
In the input data, `ENUM` values can be represented as names or as ids. First, we try to match the input value to the ENUM name. If that fails and the input value is a number, we try to match this number to the ENUM id. If the input data contains only ENUM ids, it's recommended to enable the setting `input_format_csv_enum_as_number` to optimize `ENUM` parsing.
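A minimal sketch of the id-only case (the table, enum values, and file name are illustrative):

```sql
CREATE TABLE t (x Enum('first' = 1, 'second' = 2)) ENGINE = Memory;

SET input_format_csv_enum_as_number = 1;

-- the CSV input contains only ENUM ids, e.g. lines "1" and "2"
INSERT INTO t FROM INFILE 'enum_ids.csv' FORMAT CSV;
```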
## Example usage {#example-usage}

## Format settings {#format-settings}
| Setting | Description | Default | Notes |
|---------|-------------|---------|-------|
| `format_csv_delimiter` | The character to be considered as a delimiter in CSV data. | `,` | |
| `format_csv_allow_single_quotes` | Allow strings in single quotes. | `true` | |
| `format_csv_allow_double_quotes` | Allow strings in double quotes. | `true` | |
| `format_csv_null_representation` | Custom NULL representation in CSV format. | `\N` | |
| `input_format_csv_empty_as_default` | Treat empty fields in CSV input as default values. | `true` | For complex default expressions, `input_format_defaults_for_omitted_fields` must be enabled too. |
| `input_format_csv_enum_as_number` | Treat inserted enum values in CSV formats as enum indices. | `false` | |
| `input_format_csv_use_best_effort_in_schema_inference` | Use some tweaks and heuristics to infer the schema in CSV format. If disabled, all fields will be inferred as Strings. | `true` | |
| `input_format_csv_arrays_as_nested_csv` | When reading Array from CSV, expect that its elements were serialized in nested CSV and then put into a string. | `false` | |
| `output_format_csv_crlf_end_of_line` | If set to true, the end of line in the CSV output format will be `\r\n` instead of `\n`. | `false` | |
| `input_format_csv_skip_first_lines` | Skip the specified number of lines at the beginning of the data. | `0` | |
| `input_format_csv_detect_header` | Automatically detect the header with names and types in CSV format. | `true` | |
| `input_format_csv_skip_trailing_empty_lines` | Skip trailing empty lines at the end of the data. | `false` | |
| `input_format_csv_trim_whitespaces` | Trim spaces and tabs in non-quoted CSV strings. | `true` | |
| `input_format_csv_allow_whitespace_or_tab_as_delimiter` | Allow using whitespace or tab as a field delimiter in CSV strings. | `false` | |
| `input_format_csv_allow_variable_number_of_columns` | Allow a variable number of columns in CSV format; ignore extra columns and use default values for missing columns. | `false` | |
| `input_format_csv_use_default_on_bad_values` | Allow setting a default value for a column when CSV field deserialization fails on a bad value. | `false` | |
| `input_format_csv_try_infer_numbers_from_strings` | Try to infer numbers from string fields during schema inference. | `false` | |
---
alias: []
description: 'Documentation for the CSVWithNames format'
input_format: true
keywords: ['CSVWithNames']
output_format: true
slug: /interfaces/formats/CSVWithNames
title: 'CSVWithNames'
doc_type: 'reference'
---
| Input | Output | Alias |
|-------|--------|-------|
| ✔ | ✔ | |
## Description {#description}

Also prints the header row with column names, similar to `TabSeparatedWithNames`.
## Example usage {#example-usage}

### Inserting data {#inserting-data}

:::tip
Starting from version 23.1, ClickHouse automatically detects headers in CSV files when using the `CSV` format, so it is not necessary to use `CSVWithNames` or `CSVWithNamesAndTypes`.
:::

Using the following CSV file, named `football.csv`:
```csv
date,season,home_team,away_team,home_team_goals,away_team_goals
2022-04-30,2021,Sutton United,Bradford City,1,4
2022-04-30,2021,Swindon Town,Barrow,2,1
2022-04-30,2021,Tranmere Rovers,Oldham Athletic,2,0
2022-05-02,2021,Salford City,Mansfield Town,2,2
2022-05-02,2021,Port Vale,Newport County,1,2
2022-05-07,2021,Barrow,Northampton Town,1,3
2022-05-07,2021,Bradford City,Carlisle United,2,0
2022-05-07,2021,Bristol Rovers,Scunthorpe United,7,0
2022-05-07,2021,Exeter City,Port Vale,0,1
2022-05-07,2021,Harrogate Town A.F.C.,Sutton United,0,2
2022-05-07,2021,Hartlepool United,Colchester United,0,2
2022-05-07,2021,Leyton Orient,Tranmere Rovers,0,1
2022-05-07,2021,Mansfield Town,Forest Green Rovers,2,2
2022-05-07,2021,Newport County,Rochdale,0,2
2022-05-07,2021,Oldham Athletic,Crawley Town,3,3
2022-05-07,2021,Stevenage Borough,Salford City,4,2
2022-05-07,2021,Walsall,Swindon Town,0,3
```
Create a table:
sql
CREATE TABLE football
(
`date` Date,
`season` Int16,
`home_team` LowCardinality(String),
`away_team` LowCardinality(String),
`home_team_goals` Int8,
`away_team_goals` Int8
)
ENGINE = MergeTree
ORDER BY (date, home_team);
Insert data using the
CSVWithNames
format:
sql
INSERT INTO football FROM INFILE 'football.csv' FORMAT CSVWithNames;
Reading data {#reading-data}
Read data using the
CSVWithNames
format:
sql
SELECT *
FROM football
FORMAT CSVWithNames
The output will be a CSV with a single header row: | {"source_file": "CSVWithNames.md"} | [
10cd033e-7726-4467-9649-00a12577435b | Reading data {#reading-data}
Read data using the
CSVWithNames
format:
sql
SELECT *
FROM football
FORMAT CSVWithNames
The output will be a CSV with a single header row:
csv
"date","season","home_team","away_team","home_team_goals","away_team_goals"
"2022-04-30",2021,"Sutton United","Bradford City",1,4
"2022-04-30",2021,"Swindon Town","Barrow",2,1
"2022-04-30",2021,"Tranmere Rovers","Oldham Athletic",2,0
"2022-05-02",2021,"Port Vale","Newport County",1,2
"2022-05-02",2021,"Salford City","Mansfield Town",2,2
"2022-05-07",2021,"Barrow","Northampton Town",1,3
"2022-05-07",2021,"Bradford City","Carlisle United",2,0
"2022-05-07",2021,"Bristol Rovers","Scunthorpe United",7,0
"2022-05-07",2021,"Exeter City","Port Vale",0,1
"2022-05-07",2021,"Harrogate Town A.F.C.","Sutton United",0,2
"2022-05-07",2021,"Hartlepool United","Colchester United",0,2
"2022-05-07",2021,"Leyton Orient","Tranmere Rovers",0,1
"2022-05-07",2021,"Mansfield Town","Forest Green Rovers",2,2
"2022-05-07",2021,"Newport County","Rochdale",0,2
"2022-05-07",2021,"Oldham Athletic","Crawley Town",3,3
"2022-05-07",2021,"Stevenage Borough","Salford City",4,2
"2022-05-07",2021,"Walsall","Swindon Town",0,3
Format settings {#format-settings}
:::note
If setting
input_format_with_names_use_header
is set to
1
,
the columns from the input data will be mapped to the columns of the table by their names. Columns with unknown names will be skipped if the setting
input_format_skip_unknown_fields
is set to
1
.
Otherwise, the first row will be skipped.
::: | {"source_file": "CSVWithNames.md"} | [
94d46902-68a9-47c4-bce9-22dac7eb4d21 | alias: []
description: 'Documentation for the TSKV format'
input_format: true
keywords: ['TSKV']
output_format: true
slug: /interfaces/formats/TSKV
title: 'TSKV'
doc_type: 'reference'
| Input | Output | Alias |
|-------|--------|-------|
| β | β | |
Description {#description}
Similar to the
TabSeparated
format, but outputs a value in
name=value
format.
Names are escaped the same way as in the
TabSeparated
format, and the
=
symbol is also escaped.
text
SearchPhrase= count()=8267016
SearchPhrase=bathroom interior design count()=2166
SearchPhrase=clickhouse count()=1655
SearchPhrase=2014 spring fashion count()=1549
SearchPhrase=freeform photos count()=1480
SearchPhrase=angelina jolie count()=1245
SearchPhrase=omsk count()=1112
SearchPhrase=photos of dog breeds count()=1091
SearchPhrase=curtain designs count()=1064
SearchPhrase=baku count()=1000
sql title="Query"
SELECT * FROM t_null FORMAT TSKV
text title="Response"
x=1 y=\N
:::note
When there are a large number of small columns, this format is inefficient, and there is generally no reason to use it.
Nevertheless, it is no worse than the
JSONEachRow
format in terms of efficiency.
:::
For parsing, any order is supported for the values of the different columns.
It is acceptable for some values to be omitted as they are treated as equal to their default values.
In this case, zeros and blank rows are used as default values.
Complex values that could be specified in the table are not supported as defaults.
Parsing allows an additional field
tskv
to be added without the equal sign or a value. This field is ignored.
During import, columns with unknown names will be skipped,
if setting
input_format_skip_unknown_fields
is set to
1
.
NULL
is formatted as
\N
.
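The omitted-field behaviour can be sketched as follows, using a hypothetical table `t` (fields in the inline data are tab-separated):

```sql
CREATE TABLE t (x Int32, y String) ENGINE = Memory;

-- 'y' is omitted in the second row, so it takes the String default ('')
INSERT INTO t FORMAT TSKV
x=1	y=hello
x=2
```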
Example usage {#example-usage}
Inserting data {#inserting-data}
Using the following tskv file, named
football.tskv
: | {"source_file": "TSKV.md"} | [
9cd736f0-d018-402a-a157-5996ece43f37 | NULL
is formatted as
\N
.
Example usage {#example-usage}
Inserting data {#inserting-data}
Using the following tskv file, named
football.tskv
:
tsv
date=2022-04-30 season=2021 home_team=Sutton United away_team=Bradford City home_team_goals=1 away_team_goals=4
date=2022-04-30 season=2021 home_team=Swindon Town away_team=Barrow home_team_goals=2 away_team_goals=1
date=2022-04-30 season=2021 home_team=Tranmere Rovers away_team=Oldham Athletic home_team_goals=2 away_team_goals=0
date=2022-05-02 season=2021 home_team=Port Vale away_team=Newport County home_team_goals=1 away_team_goals=2
date=2022-05-02 season=2021 home_team=Salford City away_team=Mansfield Town home_team_goals=2 away_team_goals=2
date=2022-05-07 season=2021 home_team=Barrow away_team=Northampton Town home_team_goals=1 away_team_goals=3
date=2022-05-07 season=2021 home_team=Bradford City away_team=Carlisle United home_team_goals=2 away_team_goals=0
date=2022-05-07 season=2021 home_team=Bristol Rovers away_team=Scunthorpe United home_team_goals=7 away_team_goals=0
date=2022-05-07 season=2021 home_team=Exeter City away_team=Port Vale home_team_goals=0 away_team_goals=1
date=2022-05-07 season=2021 home_team=Harrogate Town A.F.C. away_team=Sutton United home_team_goals=0 away_team_goals=2
date=2022-05-07 season=2021 home_team=Hartlepool United away_team=Colchester United home_team_goals=0 away_team_goals=2
date=2022-05-07 season=2021 home_team=Leyton Orient away_team=Tranmere Rovers home_team_goals=0 away_team_goals=1
date=2022-05-07 season=2021 home_team=Mansfield Town away_team=Forest Green Rovers home_team_goals=2 away_team_goals=2
date=2022-05-07 season=2021 home_team=Newport County away_team=Rochdale home_team_goals=0 away_team_goals=2
date=2022-05-07 season=2021 home_team=Oldham Athletic away_team=Crawley Town home_team_goals=3 away_team_goals=3
date=2022-05-07 season=2021 home_team=Stevenage Borough away_team=Salford City home_team_goals=4 away_team_goals=2
date=2022-05-07 season=2021 home_team=Walsall away_team=Swindon Town home_team_goals=0 away_team_goals=3
Insert the data:
sql
INSERT INTO football FROM INFILE 'football.tskv' FORMAT TSKV;
Reading data {#reading-data}
Read data using the
TSKV
format:
sql
SELECT *
FROM football
FORMAT TSKV
The output will be rows of tab-separated name=value pairs: | {"source_file": "TSKV.md"} | [
1ad953ee-c610-4b82-ba9f-4e31208379ef | Read data using the
TSKV
format:
sql
SELECT *
FROM football
FORMAT TSKV
The output will be rows of tab-separated name=value pairs:
tsv
date=2022-04-30 season=2021 home_team=Sutton United away_team=Bradford City home_team_goals=1 away_team_goals=4
date=2022-04-30 season=2021 home_team=Swindon Town away_team=Barrow home_team_goals=2 away_team_goals=1
date=2022-04-30 season=2021 home_team=Tranmere Rovers away_team=Oldham Athletic home_team_goals=2 away_team_goals=0
date=2022-05-02 season=2021 home_team=Port Vale away_team=Newport County home_team_goals=1 away_team_goals=2
date=2022-05-02 season=2021 home_team=Salford City away_team=Mansfield Town home_team_goals=2 away_team_goals=2
date=2022-05-07 season=2021 home_team=Barrow away_team=Northampton Town home_team_goals=1 away_team_goals=3
date=2022-05-07 season=2021 home_team=Bradford City away_team=Carlisle United home_team_goals=2 away_team_goals=0
date=2022-05-07 season=2021 home_team=Bristol Rovers away_team=Scunthorpe United home_team_goals=7 away_team_goals=0
date=2022-05-07 season=2021 home_team=Exeter City away_team=Port Vale home_team_goals=0 away_team_goals=1
date=2022-05-07 season=2021 home_team=Harrogate Town A.F.C. away_team=Sutton United home_team_goals=0 away_team_goals=2
date=2022-05-07 season=2021 home_team=Hartlepool United away_team=Colchester United home_team_goals=0 away_team_goals=2
date=2022-05-07 season=2021 home_team=Leyton Orient away_team=Tranmere Rovers home_team_goals=0 away_team_goals=1
date=2022-05-07 season=2021 home_team=Mansfield Town away_team=Forest Green Rovers home_team_goals=2 away_team_goals=2
date=2022-05-07 season=2021 home_team=Newport County away_team=Rochdale home_team_goals=0 away_team_goals=2
date=2022-05-07 season=2021 home_team=Oldham Athletic away_team=Crawley Town home_team_goals=3 away_team_goals=3
date=2022-05-07 season=2021 home_team=Stevenage Borough away_team=Salford City home_team_goals=4 away_team_goals=2
date=2022-05-07 season=2021 home_team=Walsall away_team=Swindon Town home_team_goals=0 away_team_goals=3
Format settings {#format-settings} | {"source_file": "TSKV.md"} | [
74f5d2ca-b3e1-4ae8-8cac-21db28520a6a | alias: ['TSVRaw', 'Raw']
description: 'Documentation for the TabSeparatedRaw format'
input_format: true
keywords: ['TabSeparatedRaw']
output_format: true
slug: /interfaces/formats/TabSeparatedRaw
title: 'TabSeparatedRaw'
doc_type: 'reference'
| Input | Output | Alias |
|-------|--------|-----------------|
| β | β |
TSVRaw
,
Raw
|
Description {#description}
Differs from the
TabSeparated
format in that rows are written without escaping.
:::note
When parsing with this format, tabs and line feeds are not allowed inside fields.
:::
For a comparison of the
TabSeparatedRaw
format and the
RawBlob
format see:
Raw Formats Comparison
Example usage {#example-usage}
Inserting data {#inserting-data}
Using the following tsv file, named
football.tsv
:
tsv
2022-04-30 2021 Sutton United Bradford City 1 4
2022-04-30 2021 Swindon Town Barrow 2 1
2022-04-30 2021 Tranmere Rovers Oldham Athletic 2 0
2022-05-02 2021 Port Vale Newport County 1 2
2022-05-02 2021 Salford City Mansfield Town 2 2
2022-05-07 2021 Barrow Northampton Town 1 3
2022-05-07 2021 Bradford City Carlisle United 2 0
2022-05-07 2021 Bristol Rovers Scunthorpe United 7 0
2022-05-07 2021 Exeter City Port Vale 0 1
2022-05-07 2021 Harrogate Town A.F.C. Sutton United 0 2
2022-05-07 2021 Hartlepool United Colchester United 0 2
2022-05-07 2021 Leyton Orient Tranmere Rovers 0 1
2022-05-07 2021 Mansfield Town Forest Green Rovers 2 2
2022-05-07 2021 Newport County Rochdale 0 2
2022-05-07 2021 Oldham Athletic Crawley Town 3 3
2022-05-07 2021 Stevenage Borough Salford City 4 2
2022-05-07 2021 Walsall Swindon Town 0 3
Insert the data:
sql
INSERT INTO football FROM INFILE 'football.tsv' FORMAT TabSeparatedRaw;
Reading data {#reading-data}
Read data using the
TabSeparatedRaw
format:
sql
SELECT *
FROM football
FORMAT TabSeparatedRaw
The output will be in tab separated format: | {"source_file": "TabSeparatedRaw.md"} | [
20f4efec-6c8d-4840-9cc8-9d67e4ca5eec | Reading data {#reading-data}
Read data using the
TabSeparatedRaw
format:
sql
SELECT *
FROM football
FORMAT TabSeparatedRaw
The output will be in tab separated format:
tsv
2022-04-30 2021 Sutton United Bradford City 1 4
2022-04-30 2021 Swindon Town Barrow 2 1
2022-04-30 2021 Tranmere Rovers Oldham Athletic 2 0
2022-05-02 2021 Port Vale Newport County 1 2
2022-05-02 2021 Salford City Mansfield Town 2 2
2022-05-07 2021 Barrow Northampton Town 1 3
2022-05-07 2021 Bradford City Carlisle United 2 0
2022-05-07 2021 Bristol Rovers Scunthorpe United 7 0
2022-05-07 2021 Exeter City Port Vale 0 1
2022-05-07 2021 Harrogate Town A.F.C. Sutton United 0 2
2022-05-07 2021 Hartlepool United Colchester United 0 2
2022-05-07 2021 Leyton Orient Tranmere Rovers 0 1
2022-05-07 2021 Mansfield Town Forest Green Rovers 2 2
2022-05-07 2021 Newport County Rochdale 0 2
2022-05-07 2021 Oldham Athletic Crawley Town 3 3
2022-05-07 2021 Stevenage Borough Salford City 4 2
2022-05-07 2021 Walsall Swindon Town 0 3
Format settings {#format-settings} | {"source_file": "TabSeparatedRaw.md"} | [
7911b509-c9f9-4e0f-8bac-1c34f1853774 | alias: ['TSVRawWithNamesAndTypes', 'RawWithNamesAndTypes']
description: 'Documentation for the TabSeparatedRawWithNamesAndTypes format'
input_format: true
keywords: ['TabSeparatedRawWithNamesAndTypes', 'TSVRawWithNamesAndTypes', 'RawWithNamesAndTypes']
output_format: true
slug: /interfaces/formats/TabSeparatedRawWithNamesAndTypes
title: 'TabSeparatedRawWithNamesAndTypes'
doc_type: 'reference'
| Input | Output | Alias |
|-------|--------|---------------------------------------------------|
| β | β |
TSVRawWithNamesAndTypes
,
RawWithNamesAndTypes
|
Description {#description}
Differs from the
TabSeparatedWithNamesAndTypes
format,
in that the rows are written without escaping.
:::note
When parsing with this format, tabs and line feeds are not allowed inside fields.
:::
Example usage {#example-usage}
Inserting data {#inserting-data}
Using the following tsv file, named
football.tsv
:
tsv
date season home_team away_team home_team_goals away_team_goals
Date Int16 LowCardinality(String) LowCardinality(String) Int8 Int8
2022-04-30 2021 Sutton United Bradford City 1 4
2022-04-30 2021 Swindon Town Barrow 2 1
2022-04-30 2021 Tranmere Rovers Oldham Athletic 2 0
2022-05-02 2021 Port Vale Newport County 1 2
2022-05-02 2021 Salford City Mansfield Town 2 2
2022-05-07 2021 Barrow Northampton Town 1 3
2022-05-07 2021 Bradford City Carlisle United 2 0
2022-05-07 2021 Bristol Rovers Scunthorpe United 7 0
2022-05-07 2021 Exeter City Port Vale 0 1
2022-05-07 2021 Harrogate Town A.F.C. Sutton United 0 2
2022-05-07 2021 Hartlepool United Colchester United 0 2
2022-05-07 2021 Leyton Orient Tranmere Rovers 0 1
2022-05-07 2021 Mansfield Town Forest Green Rovers 2 2
2022-05-07 2021 Newport County Rochdale 0 2
2022-05-07 2021 Oldham Athletic Crawley Town 3 3
2022-05-07 2021 Stevenage Borough Salford City 4 2
2022-05-07 2021 Walsall Swindon Town 0 3
Insert the data:
sql
INSERT INTO football FROM INFILE 'football.tsv' FORMAT TabSeparatedRawWithNamesAndTypes;
Reading data {#reading-data}
Read data using the
TabSeparatedRawWithNamesAndTypes
format:
sql
SELECT *
FROM football
FORMAT TabSeparatedRawWithNamesAndTypes
The output will be in tab separated format with two header rows for column names and types: | {"source_file": "TabSeparatedRawWithNamesAndTypes.md"} | [
9e72a448-a51a-48d0-b694-5d782eb20af1 | sql
SELECT *
FROM football
FORMAT TabSeparatedRawWithNamesAndTypes
The output will be in tab separated format with two header rows for column names and types:
tsv
date season home_team away_team home_team_goals away_team_goals
Date Int16 LowCardinality(String) LowCardinality(String) Int8 Int8
2022-04-30 2021 Sutton United Bradford City 1 4
2022-04-30 2021 Swindon Town Barrow 2 1
2022-04-30 2021 Tranmere Rovers Oldham Athletic 2 0
2022-05-02 2021 Port Vale Newport County 1 2
2022-05-02 2021 Salford City Mansfield Town 2 2
2022-05-07 2021 Barrow Northampton Town 1 3
2022-05-07 2021 Bradford City Carlisle United 2 0
2022-05-07 2021 Bristol Rovers Scunthorpe United 7 0
2022-05-07 2021 Exeter City Port Vale 0 1
2022-05-07 2021 Harrogate Town A.F.C. Sutton United 0 2
2022-05-07 2021 Hartlepool United Colchester United 0 2
2022-05-07 2021 Leyton Orient Tranmere Rovers 0 1
2022-05-07 2021 Mansfield Town Forest Green Rovers 2 2
2022-05-07 2021 Newport County Rochdale 0 2
2022-05-07 2021 Oldham Athletic Crawley Town 3 3
2022-05-07 2021 Stevenage Borough Salford City 4 2
2022-05-07 2021 Walsall Swindon Town 0 3
Format settings {#format-settings} | {"source_file": "TabSeparatedRawWithNamesAndTypes.md"} | [
9402289e-9ba8-4939-8c3e-f5e125235ca4 | alias: ['TSVRawWithNames', 'RawWithNames']
description: 'Documentation for the TabSeparatedRawWithNames format'
input_format: true
keywords: ['TabSeparatedRawWithNames', 'TSVRawWithNames', 'RawWithNames']
output_format: true
slug: /interfaces/formats/TabSeparatedRawWithNames
title: 'TabSeparatedRawWithNames'
doc_type: 'reference'
| Input | Output | Alias |
|-------|--------|-----------------------------------|
| β | β |
TSVRawWithNames
,
RawWithNames
|
Description {#description}
Differs from the
TabSeparatedWithNames
format,
in that the rows are written without escaping.
:::note
When parsing with this format, tabs and line feeds are not allowed inside fields.
:::
Example usage {#example-usage}
Inserting data {#inserting-data}
Using the following tsv file, named
football.tsv
:
tsv
date season home_team away_team home_team_goals away_team_goals
2022-04-30 2021 Sutton United Bradford City 1 4
2022-04-30 2021 Swindon Town Barrow 2 1
2022-04-30 2021 Tranmere Rovers Oldham Athletic 2 0
2022-05-02 2021 Port Vale Newport County 1 2
2022-05-02 2021 Salford City Mansfield Town 2 2
2022-05-07 2021 Barrow Northampton Town 1 3
2022-05-07 2021 Bradford City Carlisle United 2 0
2022-05-07 2021 Bristol Rovers Scunthorpe United 7 0
2022-05-07 2021 Exeter City Port Vale 0 1
2022-05-07 2021 Harrogate Town A.F.C. Sutton United 0 2
2022-05-07 2021 Hartlepool United Colchester United 0 2
2022-05-07 2021 Leyton Orient Tranmere Rovers 0 1
2022-05-07 2021 Mansfield Town Forest Green Rovers 2 2
2022-05-07 2021 Newport County Rochdale 0 2
2022-05-07 2021 Oldham Athletic Crawley Town 3 3
2022-05-07 2021 Stevenage Borough Salford City 4 2
2022-05-07 2021 Walsall Swindon Town 0 3
Insert the data:
sql
INSERT INTO football FROM INFILE 'football.tsv' FORMAT TabSeparatedRawWithNames;
Reading data {#reading-data}
Read data using the
TabSeparatedRawWithNames
format:
sql
SELECT *
FROM football
FORMAT TabSeparatedRawWithNames
The output will be in tab separated format with a single header row: | {"source_file": "TabSeparatedRawWithNames.md"} | [
3ba4906b-1362-4f38-9bcf-ce08343953b3 | Read data using the
TabSeparatedRawWithNames
format:
sql
SELECT *
FROM football
FORMAT TabSeparatedRawWithNames
The output will be in tab separated format with a single header row:
tsv
date season home_team away_team home_team_goals away_team_goals
2022-04-30 2021 Sutton United Bradford City 1 4
2022-04-30 2021 Swindon Town Barrow 2 1
2022-04-30 2021 Tranmere Rovers Oldham Athletic 2 0
2022-05-02 2021 Port Vale Newport County 1 2
2022-05-02 2021 Salford City Mansfield Town 2 2
2022-05-07 2021 Barrow Northampton Town 1 3
2022-05-07 2021 Bradford City Carlisle United 2 0
2022-05-07 2021 Bristol Rovers Scunthorpe United 7 0
2022-05-07 2021 Exeter City Port Vale 0 1
2022-05-07 2021 Harrogate Town A.F.C. Sutton United 0 2
2022-05-07 2021 Hartlepool United Colchester United 0 2
2022-05-07 2021 Leyton Orient Tranmere Rovers 0 1
2022-05-07 2021 Mansfield Town Forest Green Rovers 2 2
2022-05-07 2021 Newport County Rochdale 0 2
2022-05-07 2021 Oldham Athletic Crawley Town 3 3
2022-05-07 2021 Stevenage Borough Salford City 4 2
2022-05-07 2021 Walsall Swindon Town 0 3
Format settings {#format-settings} | {"source_file": "TabSeparatedRawWithNames.md"} | [
8468d0b1-8f9a-450e-a755-c031d00877a6 | description: 'Documentation for the TabSeparatedWithNamesAndTypes format'
keywords: ['TabSeparatedWithNamesAndTypes']
slug: /interfaces/formats/TabSeparatedWithNamesAndTypes
title: 'TabSeparatedWithNamesAndTypes'
doc_type: 'reference'
| Input | Output | Alias |
|-------|--------|------------------------------------------------|
| β | β |
TSVWithNamesAndTypes
,
RawWithNamesAndTypes
|
Description {#description}
Differs from the
TabSeparated
format in that the column names are written to the first row, while the column types are in the second row.
:::note
- If setting
input_format_with_names_use_header
is set to
1
,
the columns from the input data will be mapped to the columns in the table by their names. Columns with unknown names will be skipped if the setting
input_format_skip_unknown_fields
is set to 1.
Otherwise, the first row will be skipped.
- If setting
input_format_with_types_use_header
is set to
1
,
the types from input data will be compared with the types of the corresponding columns from the table. Otherwise, the second row will be skipped.
:::
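These settings can be toggled per session; a minimal sketch:

```sql
-- Map input columns to table columns by name, skip unknown columns,
-- and validate the types header row against the table definition.
SET input_format_with_names_use_header = 1;
SET input_format_skip_unknown_fields = 1;
SET input_format_with_types_use_header = 1;
```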
Example usage {#example-usage}
Inserting data {#inserting-data}
Using the following tsv file, named
football.tsv
:
tsv
date season home_team away_team home_team_goals away_team_goals
Date Int16 LowCardinality(String) LowCardinality(String) Int8 Int8
2022-04-30 2021 Sutton United Bradford City 1 4
2022-04-30 2021 Swindon Town Barrow 2 1
2022-04-30 2021 Tranmere Rovers Oldham Athletic 2 0
2022-05-02 2021 Port Vale Newport County 1 2
2022-05-02 2021 Salford City Mansfield Town 2 2
2022-05-07 2021 Barrow Northampton Town 1 3
2022-05-07 2021 Bradford City Carlisle United 2 0
2022-05-07 2021 Bristol Rovers Scunthorpe United 7 0
2022-05-07 2021 Exeter City Port Vale 0 1
2022-05-07 2021 Harrogate Town A.F.C. Sutton United 0 2
2022-05-07 2021 Hartlepool United Colchester United 0 2
2022-05-07 2021 Leyton Orient Tranmere Rovers 0 1
2022-05-07 2021 Mansfield Town Forest Green Rovers 2 2
2022-05-07 2021 Newport County Rochdale 0 2
2022-05-07 2021 Oldham Athletic Crawley Town 3 3
2022-05-07 2021 Stevenage Borough Salford City 4 2
2022-05-07 2021 Walsall Swindon Town 0 3
Insert the data:
sql
INSERT INTO football FROM INFILE 'football.tsv' FORMAT TabSeparatedWithNamesAndTypes;
Reading data {#reading-data}
Read data using the
TabSeparatedWithNamesAndTypes
format:
sql
SELECT *
FROM football
FORMAT TabSeparatedWithNamesAndTypes
The output will be in tab separated format with two header rows for column names and types: | {"source_file": "TabSeparatedWithNamesAndTypes.md"} | [
400aa7af-2457-4d87-83b5-8c383723bce0 | sql
SELECT *
FROM football
FORMAT TabSeparatedWithNamesAndTypes
The output will be in tab separated format with two header rows for column names and types:
tsv
date season home_team away_team home_team_goals away_team_goals
Date Int16 LowCardinality(String) LowCardinality(String) Int8 Int8
2022-04-30 2021 Sutton United Bradford City 1 4
2022-04-30 2021 Swindon Town Barrow 2 1
2022-04-30 2021 Tranmere Rovers Oldham Athletic 2 0
2022-05-02 2021 Port Vale Newport County 1 2
2022-05-02 2021 Salford City Mansfield Town 2 2
2022-05-07 2021 Barrow Northampton Town 1 3
2022-05-07 2021 Bradford City Carlisle United 2 0
2022-05-07 2021 Bristol Rovers Scunthorpe United 7 0
2022-05-07 2021 Exeter City Port Vale 0 1
2022-05-07 2021 Harrogate Town A.F.C. Sutton United 0 2
2022-05-07 2021 Hartlepool United Colchester United 0 2
2022-05-07 2021 Leyton Orient Tranmere Rovers 0 1
2022-05-07 2021 Mansfield Town Forest Green Rovers 2 2
2022-05-07 2021 Newport County Rochdale 0 2
2022-05-07 2021 Oldham Athletic Crawley Town 3 3
2022-05-07 2021 Stevenage Borough Salford City 4 2
2022-05-07 2021 Walsall Swindon Town 0 3
Format settings {#format-settings} | {"source_file": "TabSeparatedWithNamesAndTypes.md"} | [
3527f739-5080-442b-bf8a-30eb1723c2b5 | alias: ['TSV']
description: 'Documentation for the TSV format'
input_format: true
keywords: ['TabSeparated', 'TSV']
output_format: true
slug: /interfaces/formats/TabSeparated
title: 'TabSeparated'
doc_type: 'reference'
| Input | Output | Alias |
|-------|--------|--------|
| β | β |
TSV
|
Description {#description}
In TabSeparated format, data is written by row. Each row contains values separated by tabs. Each value is followed by a tab, except the last value in the row, which is followed by a line feed. Strictly Unix line feeds are assumed everywhere. The last row also must contain a line feed at the end. Values are written in text format, without enclosing quotation marks, and with special characters escaped.
This format is also available under the name
TSV
.
The
TabSeparated
format is convenient for processing data using custom programs and scripts. It is used by default in the HTTP interface, and in the command-line client's batch mode. This format also allows transferring data between different DBMSs. For example, you can get a dump from MySQL and upload it to ClickHouse, or vice versa.
The
TabSeparated
format supports outputting total values (when using WITH TOTALS) and extreme values (when 'extremes' is set to 1). In these cases, the total values and extremes are output after the main data. The main result, total values, and extremes are separated from each other by an empty line. Example:
```sql
SELECT EventDate, count() AS c FROM test.hits GROUP BY EventDate WITH TOTALS ORDER BY EventDate FORMAT TabSeparated
```

```text
2014-03-17 1406958
2014-03-18 1383658
2014-03-19 1405797
2014-03-20 1353623
2014-03-21 1245779
2014-03-22 1031592
2014-03-23 1046491

1970-01-01 8873898

2014-03-17 1031592
2014-03-23 1406958
```
Data formatting {#tabseparated-data-formatting}
Integer numbers are written in decimal form. Numbers can contain an extra "+" character at the beginning (ignored when parsing, and not recorded when formatting). Non-negative numbers can't contain the negative sign. When reading, it is allowed to parse an empty string as a zero, or (for signed types) a string consisting of just a minus sign as a zero. Numbers that do not fit into the corresponding data type may be parsed as a different number, without an error message.
Floating-point numbers are written in decimal form. The dot is used as the decimal separator. Exponential entries are supported, as are 'inf', '+inf', '-inf', and 'nan'. An entry of floating-point numbers may begin or end with a decimal point.
During formatting, accuracy may be lost on floating-point numbers.
During parsing, it is not strictly required to read the nearest machine-representable number. | {"source_file": "TabSeparated.md"} | [
32bf70c1-2fa9-46d6-9aa8-69e65d23247b | Dates are written in YYYY-MM-DD format and parsed in the same format, but with any characters as separators.
Dates with times are written in the format
YYYY-MM-DD hh:mm:ss
and parsed in the same format, but with any characters as separators.
This all occurs in the system time zone at the time the client or server starts (depending on which of them formats data). For dates with times, daylight saving time is not specified. So if a dump has times during daylight saving time, the dump does not unequivocally match the data, and parsing will select one of the two times.
During a read operation, incorrect dates and dates with times can be parsed with natural overflow or as null dates and times, without an error message.
As an exception, parsing dates with times is also supported in Unix timestamp format, if it consists of exactly 10 decimal digits. The result is not time zone-dependent. The formats
YYYY-MM-DD hh:mm:ss
and
NNNNNNNNNN
are differentiated automatically.
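A minimal sketch, assuming a hypothetical table and a server time zone of UTC (in which case both rows parse to the same value):

```sql
CREATE TABLE events (ts DateTime) ENGINE = Memory;

-- The first row uses YYYY-MM-DD hh:mm:ss, the second a 10-digit Unix timestamp
INSERT INTO events FORMAT TabSeparated
2014-03-17 12:00:00
1395057600
```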
Strings are output with backslash-escaped special characters. The following escape sequences are used for output:
\b
,
\f
,
\r
,
\n
,
\t
,
\0
,
\'
,
\\
. Parsing also supports the sequences
\a
,
\v
, and
\xHH
(hex escape sequences) and any
\c
sequences, where
c
is any character (these sequences are converted to
c
). Thus, reading data supports formats where a line feed can be written as
\n
or
\
, or as a line feed. For example, the string
Hello world
with a line feed between the words instead of a space can be parsed in any of the following variations:
```text
Hello\nworld
Hello\
world
```
The second variant is supported because MySQL uses it when writing tab-separated dumps.
The minimum set of characters that you need to escape when passing data in TabSeparated format: tab, line feed (LF) and backslash.
Only a small set of symbols are escaped. You can easily stumble onto a string value that your terminal will ruin in output.
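For instance (a sketch), a literal containing a tab, a line feed, and a backslash should come back escaped on output:

```sql
-- The embedded tab, line feed, and backslash are written as \t, \n, \\
SELECT 'a\tb\nc\\d' FORMAT TSV
```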
Arrays are written as a list of comma-separated values in square brackets. Numbers in the array are formatted as normal.
Date
and
DateTime
types are written in single quotes. Strings are written in single quotes with the same escaping rules as above.
NULL
is formatted according to setting
format_tsv_null_representation
(default value is
\N
).
In input data, ENUM values can be represented as names or as ids. First, we try to match the input value to the ENUM name. If we fail and the input value is a number, we try to match this number to ENUM id.
If input data contains only ENUM ids, it's recommended to enable the setting
input_format_tsv_enum_as_number
to optimize ENUM parsing.
Each element of
Nested
structures is represented as an array.
For example:
```sql
CREATE TABLE nestedt
(
    `id` UInt8,
    `aux` Nested(
        a UInt8,
        b String
    )
)
ENGINE = TinyLog
```

```sql
INSERT INTO nestedt VALUES ( 1, [1], ['a'])
```

```sql
SELECT * FROM nestedt FORMAT TSV
```

```response
1	[1]	['a']
```
| {"source_file": "TabSeparated.md"} |
ee08453b-5914-4f60-ba79-0512601f908c |
Example usage {#example-usage}
Inserting data {#inserting-data}
Using the following tsv file, named as
football.tsv
:
tsv
2022-04-30 2021 Sutton United Bradford City 1 4
2022-04-30 2021 Swindon Town Barrow 2 1
2022-04-30 2021 Tranmere Rovers Oldham Athletic 2 0
2022-05-02 2021 Port Vale Newport County 1 2
2022-05-02 2021 Salford City Mansfield Town 2 2
2022-05-07 2021 Barrow Northampton Town 1 3
2022-05-07 2021 Bradford City Carlisle United 2 0
2022-05-07 2021 Bristol Rovers Scunthorpe United 7 0
2022-05-07 2021 Exeter City Port Vale 0 1
2022-05-07 2021 Harrogate Town A.F.C. Sutton United 0 2
2022-05-07 2021 Hartlepool United Colchester United 0 2
2022-05-07 2021 Leyton Orient Tranmere Rovers 0 1
2022-05-07 2021 Mansfield Town Forest Green Rovers 2 2
2022-05-07 2021 Newport County Rochdale 0 2
2022-05-07 2021 Oldham Athletic Crawley Town 3 3
2022-05-07 2021 Stevenage Borough Salford City 4 2
2022-05-07 2021 Walsall Swindon Town 0 3
Insert the data:
sql
INSERT INTO football FROM INFILE 'football.tsv' FORMAT TabSeparated;
Reading data {#reading-data}
Read data using the
TabSeparated
format:
sql
SELECT *
FROM football
FORMAT TabSeparated
The output will be in tab separated format:
tsv
2022-04-30 2021 Sutton United Bradford City 1 4
2022-04-30 2021 Swindon Town Barrow 2 1
2022-04-30 2021 Tranmere Rovers Oldham Athletic 2 0
2022-05-02 2021 Port Vale Newport County 1 2
2022-05-02 2021 Salford City Mansfield Town 2 2
2022-05-07 2021 Barrow Northampton Town 1 3
2022-05-07 2021 Bradford City Carlisle United 2 0
2022-05-07 2021 Bristol Rovers Scunthorpe United 7 0
2022-05-07 2021 Exeter City Port Vale 0 1
2022-05-07 2021 Harrogate Town A.F.C. Sutton United 0 2
2022-05-07 2021 Hartlepool United Colchester United 0 2
2022-05-07 2021 Leyton Orient Tranmere Rovers 0 1
2022-05-07 2021 Mansfield Town Forest Green Rovers 2 2
2022-05-07 2021 Newport County Rochdale 0 2
2022-05-07 2021 Oldham Athletic Crawley Town 3 3
2022-05-07 2021 Stevenage Borough Salford City 4 2
2022-05-07 2021 Walsall Swindon Town 0 3
Format settings {#format-settings} | {"source_file": "TabSeparated.md"} |
5c8685a7-e039-4037-9e09-f438bee22f47 | | Setting | Description | Default |
|------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|
|
format_tsv_null_representation
| Custom NULL representation in TSV format. |
\N
|
|
input_format_tsv_empty_as_default
| treat empty fields in TSV input as default values. For complex default expressions
input_format_defaults_for_omitted_fields
must be enabled too. |
false
|
|
input_format_tsv_enum_as_number
| treat inserted enum values in TSV formats as enum indices. |
false
|
|
input_format_tsv_use_best_effort_in_schema_inference
| use some tweaks and heuristics to infer schema in TSV format. If disabled, all fields will be inferred as Strings. |
true
|
|
output_format_tsv_crlf_end_of_line
| if it is set true, end of line in TSV output format will be
\r\n
instead of
\n
. |
false
|
|
input_format_tsv_crlf_end_of_line
| if it is set true, end of line in TSV input format will be
\r\n
instead of
\n
. |
false
|
|
input_format_tsv_skip_first_lines | {"source_file": "TabSeparated.md"} | [
517ec869-06a1-451c-be94-ecc41bc86f0e |
| Setting | Description | Default |
|---------|-------------|---------|
| `input_format_tsv_skip_first_lines` | Skip the specified number of lines at the beginning of the data. | `0` |
| `input_format_tsv_detect_header` | Automatically detect header with names and types in TSV format. | `true` |
| `input_format_tsv_skip_trailing_empty_lines` | Skip trailing empty lines at the end of the data. | `false` |
| `input_format_tsv_allow_variable_number_of_columns` | Allow variable number of columns in TSV format, ignore extra columns and use default values on missing columns. | `false` |
| {"source_file": "TabSeparated.md"} |
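The semantics of allowing a variable number of columns can be sketched as follows (an illustrative Python helper, not ClickHouse code):

```python
def normalize_row(fields, n_columns, defaults):
    """Sketch of input_format_tsv_allow_variable_number_of_columns:
    extra input columns are ignored; missing ones take column defaults."""
    fields = list(fields[:n_columns])          # ignore extra columns
    fields += defaults[len(fields):n_columns]  # fill missing with defaults
    return fields
```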
d2c212bd-e83f-4562-9fcd-f6df92413b01 | alias: ['TSVWithNames']
description: 'Documentation for the TabSeparatedWithNames format'
input_format: true
keywords: ['TabSeparatedWithNames']
output_format: true
slug: /interfaces/formats/TabSeparatedWithNames
title: 'TabSeparatedWithNames'
doc_type: 'reference'
| Input | Output | Alias |
|-------|--------|--------------------------------|
| ✔ | ✔ | `TSVWithNames`, `RawWithNames` |
Description {#description}
Differs from the
TabSeparated
format in that the column names are written in the first row.
During parsing, the first row is expected to contain the column names. You can use column names to determine their position and to check their correctness.
:::note
If setting `input_format_with_names_use_header` is set to `1`, the columns from the input data will be mapped to the columns of the table by their names; columns with unknown names will be skipped if setting `input_format_skip_unknown_fields` is set to `1`. Otherwise, the first row will be skipped.
:::
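The header-based mapping can be sketched in a few lines (illustrative only, not ClickHouse code; the setting names in the docstring are the real ones described above):

```python
def map_row_to_table(header, row, table_columns, skip_unknown=True):
    """Sketch of header-based mapping (input_format_with_names_use_header = 1):
    input fields are matched to table columns by name; unknown names are
    dropped when skip_unknown (cf. input_format_skip_unknown_fields) is set."""
    result = {}
    for name, value in zip(header, row):
        if name in table_columns:
            result[name] = value
        elif not skip_unknown:
            raise ValueError(f"unknown column: {name}")
    return result
```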
Example usage {#example-usage}
Inserting data {#inserting-data}
Using the following tsv file, named as
football.tsv
:
tsv
date season home_team away_team home_team_goals away_team_goals
2022-04-30 2021 Sutton United Bradford City 1 4
2022-04-30 2021 Swindon Town Barrow 2 1
2022-04-30 2021 Tranmere Rovers Oldham Athletic 2 0
2022-05-02 2021 Port Vale Newport County 1 2
2022-05-02 2021 Salford City Mansfield Town 2 2
2022-05-07 2021 Barrow Northampton Town 1 3
2022-05-07 2021 Bradford City Carlisle United 2 0
2022-05-07 2021 Bristol Rovers Scunthorpe United 7 0
2022-05-07 2021 Exeter City Port Vale 0 1
2022-05-07 2021 Harrogate Town A.F.C. Sutton United 0 2
2022-05-07 2021 Hartlepool United Colchester United 0 2
2022-05-07 2021 Leyton Orient Tranmere Rovers 0 1
2022-05-07 2021 Mansfield Town Forest Green Rovers 2 2
2022-05-07 2021 Newport County Rochdale 0 2
2022-05-07 2021 Oldham Athletic Crawley Town 3 3
2022-05-07 2021 Stevenage Borough Salford City 4 2
2022-05-07 2021 Walsall Swindon Town 0 3
Insert the data:
sql
INSERT INTO football FROM INFILE 'football.tsv' FORMAT TabSeparatedWithNames;
Reading data {#reading-data}
Read data using the
TabSeparatedWithNames
format:
sql
SELECT *
FROM football
FORMAT TabSeparatedWithNames
The output will be in tab separated format: | {"source_file": "TabSeparatedWithNames.md"} |
0c79eabf-eeaf-462f-860d-a11741c239ee |
tsv
date season home_team away_team home_team_goals away_team_goals
2022-04-30 2021 Sutton United Bradford City 1 4
2022-04-30 2021 Swindon Town Barrow 2 1
2022-04-30 2021 Tranmere Rovers Oldham Athletic 2 0
2022-05-02 2021 Port Vale Newport County 1 2
2022-05-02 2021 Salford City Mansfield Town 2 2
2022-05-07 2021 Barrow Northampton Town 1 3
2022-05-07 2021 Bradford City Carlisle United 2 0
2022-05-07 2021 Bristol Rovers Scunthorpe United 7 0
2022-05-07 2021 Exeter City Port Vale 0 1
2022-05-07 2021 Harrogate Town A.F.C. Sutton United 0 2
2022-05-07 2021 Hartlepool United Colchester United 0 2
2022-05-07 2021 Leyton Orient Tranmere Rovers 0 1
2022-05-07 2021 Mansfield Town Forest Green Rovers 2 2
2022-05-07 2021 Newport County Rochdale 0 2
2022-05-07 2021 Oldham Athletic Crawley Town 3 3
2022-05-07 2021 Stevenage Borough Salford City 4 2
2022-05-07 2021 Walsall Swindon Town 0 3
Format settings {#format-settings} | {"source_file": "TabSeparatedWithNames.md"} |
af93c8a1-fd16-4fcd-8fc7-fe80faa2e9fd | alias: []
description: 'Documentation for the PrettyCompactNoEscapesMonoBlock format'
input_format: false
keywords: ['PrettyCompactNoEscapesMonoBlock']
output_format: true
slug: /interfaces/formats/PrettyCompactNoEscapesMonoBlock
title: 'PrettyCompactNoEscapesMonoBlock'
doc_type: 'reference'
import PrettyFormatSettings from './_snippets/common-pretty-format-settings.md';
| Input | Output | Alias |
|-------|--------|-------|
| ✗ | ✔ |       |
Description {#description}
Differs from the `PrettyCompactNoEscapes` format in that up to 10,000 rows are buffered and then output as a single table, not by blocks.
Example usage {#example-usage}
Format settings {#format-settings} | {"source_file": "PrettyCompactNoEscapesMonoBlock.md"} |
496acff9-3d53-4faa-84a2-bb0a9ee3da1a | alias: []
description: 'Documentation for the Pretty format'
input_format: false
keywords: ['Pretty']
output_format: true
slug: /interfaces/formats/Pretty
title: 'Pretty'
doc_type: 'reference'
import PrettyFormatSettings from './_snippets/common-pretty-format-settings.md';
| Input | Output | Alias |
|-------|--------|-------|
| ✗ | ✔ |       |
Description {#description}
The `Pretty` format outputs data as Unicode-art tables, using ANSI escape sequences to display colors in the terminal.
A full grid of the table is drawn, and each row occupies two lines in the terminal.
Each result block is output as a separate table.
This is necessary so that blocks can be output without buffering results (buffering would be necessary to pre-calculate the visible width of all the values).
`NULL` is output as `ᴺᵁᴸᴸ`.
Example usage {#example-usage}
Example (shown for the `PrettyCompact` format):

```sql title="Query"
SELECT * FROM t_null
```

```response title="Response"
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
└───┴──────┘
```
Rows are not escaped in any of the `Pretty` formats. The following example is shown for the `PrettyCompact` format:

```sql title="Query"
SELECT 'String with \'quotes\' and \t character' AS Escaping_test
```

```response title="Response"
┌─Escaping_test──────────────────────────┐
│ String with 'quotes' and      character │
└────────────────────────────────────────┘
```
To avoid dumping too much data to the terminal, only the first 10,000 rows are printed. If the number of rows is greater than or equal to 10,000, the message "Showed first 10 000" is printed.
:::note
This format is only appropriate for outputting a query result, but not for parsing data.
:::
The Pretty format supports outputting total values (when using `WITH TOTALS`) and extremes (when 'extremes' is set to 1). In these cases, total values and extreme values are output after the main data, in separate tables. This is shown in the following example, which uses the `PrettyCompact` format:
```sql title="Query"
SELECT EventDate, count() AS c
FROM test.hits
GROUP BY EventDate
WITH TOTALS
ORDER BY EventDate
FORMAT PrettyCompact
```
```response title="Response"
┌──EventDate─┬───────c─┐
│ 2014-03-17 │ 1406958 │
│ 2014-03-18 │ 1383658 │
│ 2014-03-19 │ 1405797 │
│ 2014-03-20 │ 1353623 │
│ 2014-03-21 │ 1245779 │
│ 2014-03-22 │ 1031592 │
│ 2014-03-23 │ 1046491 │
└────────────┴─────────┘

Totals:
┌──EventDate─┬───────c─┐
│ 1970-01-01 │ 8873898 │
└────────────┴─────────┘

Extremes:
┌──EventDate─┬───────c─┐
│ 2014-03-17 │ 1031592 │
│ 2014-03-23 │ 1406958 │
└────────────┴─────────┘
```
Format settings {#format-settings} | {"source_file": "Pretty.md"} |
311572ba-487d-4053-b5d9-0df306ccddd0 | alias: []
description: 'Documentation for the PrettyCompactMonoBlock format'
input_format: false
keywords: ['PrettyCompactMonoBlock']
output_format: true
slug: /interfaces/formats/PrettyCompactMonoBlock
title: 'PrettyCompactMonoBlock'
doc_type: 'reference'
import PrettyFormatSettings from './_snippets/common-pretty-format-settings.md';
| Input | Output | Alias |
|-------|--------|-------|
| ✗ | ✔ |       |
Description {#description}
Differs from the `PrettyCompact` format in that up to 10,000 rows are buffered and then output as a single table, not by blocks.
Example usage {#example-usage}
Format settings {#format-settings} | {"source_file": "PrettyCompactMonoBlock.md"} |
8675ecc4-3770-4f9e-9fc0-cf75d9ae8d4a | alias: []
description: 'Documentation for the PrettyMonoBlock format'
input_format: false
keywords: ['PrettyMonoBlock']
output_format: true
slug: /interfaces/formats/PrettyMonoBlock
title: 'PrettyMonoBlock'
doc_type: 'reference'
import PrettyFormatSettings from './_snippets/common-pretty-format-settings.md';
| Input | Output | Alias |
|-------|--------|-------|
| ✗ | ✔ |       |
Description {#description}
Differs from the `Pretty` format in that up to 10,000 rows are buffered and then output as a single table, not by blocks.
Example usage {#example-usage}
Format settings {#format-settings} | {"source_file": "PrettyMonoBlock.md"} |
1fcff14c-ebc5-48f5-bb6e-f7e5c6ab6aef | alias: []
description: 'Documentation for the PrettySpaceNoEscapes format'
input_format: false
keywords: ['PrettySpaceNoEscapes']
output_format: true
slug: /interfaces/formats/PrettySpaceNoEscapes
title: 'PrettySpaceNoEscapes'
doc_type: 'reference'
import PrettyFormatSettings from './_snippets/common-pretty-format-settings.md';
| Input | Output | Alias |
|-------|--------|-------|
| ✗ | ✔ |       |
Description {#description}
Differs from the `PrettySpace` format in that ANSI escape sequences are not used.
This is necessary for displaying this format in a browser, as well as for using the 'watch' command-line utility.
Example usage {#example-usage}
Format settings {#format-settings} | {"source_file": "PrettySpaceNoEscapes.md"} |
492f3415-e83e-4bfd-a099-9eb3ec479234 | alias: []
description: 'Documentation for the PrettySpaceNoEscapesMonoBlock format'
input_format: false
keywords: ['PrettySpaceNoEscapesMonoBlock']
output_format: true
slug: /interfaces/formats/PrettySpaceNoEscapesMonoBlock
title: 'PrettySpaceNoEscapesMonoBlock'
doc_type: 'reference'
import PrettyFormatSettings from './_snippets/common-pretty-format-settings.md';
| Input | Output | Alias |
|-------|--------|-------|
| ✗ | ✔ |       |
Description {#description}
Differs from the `PrettySpaceNoEscapes` format in that up to 10,000 rows are buffered and then output as a single table, not by blocks.
Example usage {#example-usage}
Format settings {#format-settings} | {"source_file": "PrettySpaceNoEscapesMonoBlock.md"} |
f383f864-87a2-401d-9df5-49088c7751b1 | alias: []
description: 'Documentation for the PrettySpace format'
input_format: false
keywords: ['PrettySpace']
output_format: true
slug: /interfaces/formats/PrettySpace
title: 'PrettySpace'
doc_type: 'reference'
import PrettyFormatSettings from './_snippets/common-pretty-format-settings.md';
| Input | Output | Alias |
|-------|--------|-------|
| ✗ | ✔ |       |
Description {#description}
Differs from the `PrettyCompact` format in that whitespace (space characters) is used for displaying the table instead of a grid.
Example usage {#example-usage}
Format settings {#format-settings} | {"source_file": "PrettySpace.md"} |
3ea000be-4e11-41e6-a0d8-621b79664dd0 | alias: []
description: 'Documentation for the PrettyCompact format'
input_format: false
keywords: ['PrettyCompact']
output_format: true
slug: /interfaces/formats/PrettyCompact
title: 'PrettyCompact'
doc_type: 'reference'
import PrettyFormatSettings from './_snippets/common-pretty-format-settings.md';
| Input | Output | Alias |
|-------|--------|-------|
| ✗ | ✔ |       |
Description {#description}
Differs from the `Pretty` format in that the table is displayed with a grid drawn between rows. Because of this, the result is more compact.
:::note
This format is used by default in the command-line client in interactive mode.
:::
Example usage {#example-usage}
Format settings {#format-settings} | {"source_file": "PrettyCompact.md"} |
2fa2cbdb-345d-41b7-afde-899b85d690a3 | alias: []
description: 'Documentation for the PrettySpaceMonoBlock format'
input_format: false
keywords: ['PrettySpaceMonoBlock']
output_format: true
slug: /interfaces/formats/PrettySpaceMonoBlock
title: 'PrettySpaceMonoBlock'
doc_type: 'reference'
import PrettyFormatSettings from './_snippets/common-pretty-format-settings.md';
| Input | Output | Alias |
|-------|--------|-------|
| ✗ | ✔ |       |
Description {#description}
Differs from the `PrettySpace` format in that up to 10,000 rows are buffered and then output as a single table, not by blocks.
Example usage {#example-usage}
Format settings {#format-settings} | {"source_file": "PrettySpaceMonoBlock.md"} |
732a97d4-ba4c-4860-8d44-52bc3ea7e05d | alias: []
description: 'Documentation for the PrettyNoEscapes format'
input_format: false
keywords: ['PrettyNoEscapes']
output_format: true
slug: /interfaces/formats/PrettyNoEscapes
title: 'PrettyNoEscapes'
doc_type: 'reference'
import PrettyFormatSettings from './_snippets/common-pretty-format-settings.md';
| Input | Output | Alias |
|-------|--------|-------|
| ✗ | ✔ |       |
Description {#description}
Differs from `Pretty` in that ANSI escape sequences aren't used.
This is necessary for displaying the format in a browser, as well as for using the 'watch' command-line utility.
Example usage {#example-usage}
Example:
```bash
$ watch -n1 "clickhouse-client --query='SELECT event, value FROM system.events FORMAT PrettyCompactNoEscapes'"
```
:::note
The
HTTP interface
can be used for displaying this format in the browser.
:::
Format settings {#format-settings} | {"source_file": "PrettyNoEscapes.md"} |
8d6d0842-35fa-4c50-b104-d0b68852d1d2 | alias: []
description: 'Documentation for the PrettyNoEscapesMonoBlock format'
input_format: false
keywords: ['PrettyNoEscapesMonoBlock']
output_format: true
slug: /interfaces/formats/PrettyNoEscapesMonoBlock
title: 'PrettyNoEscapesMonoBlock'
doc_type: 'reference'
import PrettyFormatSettings from './_snippets/common-pretty-format-settings.md';
| Input | Output | Alias |
|-------|--------|-------|
| ✗ | ✔ |       |
Description {#description}
Differs from the `PrettyNoEscapes` format in that up to 10,000 rows are buffered and then output as a single table, not by blocks.
Example usage {#example-usage}
Format settings {#format-settings} | {"source_file": "PrettyNoEscapesMonoBlock.md"} |
5e715482-464a-4aa0-84a5-f2a54e7cb37d | alias: []
description: 'Documentation for the PrettyCompactNoEscapes format'
input_format: false
keywords: ['PrettyCompactNoEscapes']
output_format: true
slug: /interfaces/formats/PrettyCompactNoEscapes
title: 'PrettyCompactNoEscapes'
doc_type: 'reference'
import PrettyFormatSettings from './_snippets/common-pretty-format-settings.md';
| Input | Output | Alias |
|-------|--------|-------|
| ✗ | ✔ |       |
Description {#description}
Differs from the `PrettyCompact` format in that ANSI escape sequences aren't used.
This is necessary for displaying the format in a browser, as well as for using the 'watch' command-line utility.
Example usage {#example-usage}
Format settings {#format-settings} | {"source_file": "PrettyCompactNoEscapes.md"} |
473a3b5b-8129-44d5-ad8c-5fdc027c0935 | alias: []
description: 'Documentation for the Parquet format'
input_format: true
keywords: ['Parquet']
output_format: true
slug: /interfaces/formats/Parquet
title: 'Parquet'
doc_type: 'reference'
| Input | Output | Alias |
|-------|--------|-------|
| ✔ | ✔ |       |
Description {#description}
Apache Parquet is a columnar storage format widespread in the Hadoop ecosystem. ClickHouse supports read and write operations for this format.
Data types matching {#data-types-matching-parquet}
The table below shows how Parquet data types match ClickHouse data types.
| Parquet type (logical, converted, or physical) | ClickHouse data type |
|------------------------------------------------|----------------------|
| `BOOLEAN` | `Bool` |
| `UINT_8` | `UInt8` |
| `INT_8` | `Int8` |
| `UINT_16` | `UInt16` |
| `INT_16` | `Int16` / `Enum16` |
| `UINT_32` | `UInt32` |
| `INT_32` | `Int32` |
| `UINT_64` | `UInt64` |
| `INT_64` | `Int64` |
| `DATE` | `Date32` |
| `TIMESTAMP`, `TIME` | `DateTime64` |
| `FLOAT` | `Float32` |
| `DOUBLE` | `Float64` |
| `INT96` | `DateTime64(9, 'UTC')` |
| `BYTE_ARRAY`, `UTF8`, `ENUM`, `BSON` | `String` |
| `JSON` | `JSON` |
| `FIXED_LEN_BYTE_ARRAY` | `FixedString` |
| `DECIMAL` | `Decimal` |
| `LIST` | `Array` |
| `MAP` | `Map` |
| struct | `Tuple` |
| `FLOAT16` | `Float32` |
| `UUID` | `FixedString(16)` |
| `INTERVAL` | `FixedString(12)` |
When writing a Parquet file, data types that don't have a matching Parquet type are converted to the nearest available type:

| ClickHouse data type | Parquet type |
|----------------------|--------------|
| `IPv4` | `UINT_32` |
| `IPv6` | `FIXED_LEN_BYTE_ARRAY` (16 bytes) |
| `Date` (16 bits) | `DATE` (32 bits) |
| `DateTime` (32 bits, seconds) | `TIMESTAMP` (64 bits, milliseconds) |
| `Int128`/`UInt128`/`Int256`/`UInt256` | `FIXED_LEN_BYTE_ARRAY` (16/32 bytes, little-endian) |
Arrays can be nested and can have a value of `Nullable` type as an argument. `Tuple` and `Map` types can also be nested.
Data types of ClickHouse table columns can differ from the corresponding fields of the Parquet data inserted. When inserting data, ClickHouse interprets data types according to the table above and then casts the data to the data type set for the ClickHouse table column. E.g. a `UINT_32` Parquet column can be read into an `IPv4`
ClickHouse column. | {"source_file": "Parquet.md"} |
15c75a96-bd0e-4636-903c-33d21d9b36d9 | For some Parquet types there's no closely matching ClickHouse type. We read them as follows:
*
TIME
(time of day) is read as a timestamp. E.g.
10:23:13.000
becomes
1970-01-01 10:23:13.000
.
*
TIMESTAMP
/
TIME
with
isAdjustedToUTC=false
is a local wall-clock time (year, month, day, hour, minute, second and subsecond fields in a local timezone, regardless of what specific time zone is considered local), same as SQL
TIMESTAMP WITHOUT TIME ZONE
. ClickHouse reads it as if it were a UTC timestamp instead. E.g.
2025-09-29 18:42:13.000
(representing a reading of a local wall clock) becomes
2025-09-29 18:42:13.000
(
DateTime64(3, 'UTC')
representing a point in time). If converted to String, it shows the correct year, month, day, hour, minute, second and subsecond, which can then be interpreted as being in some local timezone instead of UTC. Counterintuitively, changing the type from
DateTime64(3, 'UTC')
to
DateTime64(3)
would not help as both types represent a point in time rather than a clock reading, but
DateTime64(3)
would incorrectly be formatted using local timezone.
*
INTERVAL
is currently read as
FixedString(12)
with raw binary representation of the time interval, as encoded in Parquet file.
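The wall-clock behaviour can be illustrated with plain `datetime` arithmetic (a sketch of the semantics described above, not ClickHouse code):

```python
from datetime import datetime, timezone

# A Parquet local wall-clock reading (isAdjustedToUTC=false) carries only
# date/time fields, with no zone. Model it as a naive datetime:
wall_clock = datetime(2025, 9, 29, 18, 42, 13)

# Reading it "as if it were UTC" keeps the fields and attaches the UTC zone,
# turning a clock reading into a specific point in time:
as_read = wall_clock.replace(tzinfo=timezone.utc)

# Formatting to a string reproduces the original fields exactly, which can
# then be re-interpreted as local wall-clock time by the consumer:
formatted = as_read.strftime("%Y-%m-%d %H:%M:%S")
```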
Example usage {#example-usage}
Inserting data {#inserting-data}
Using a Parquet file with the following data, named as
football.parquet
: | {"source_file": "Parquet.md"} |
e0182724-520e-4252-8720-c0e222947a41 |
```text
    ┌───────date─┬─season─┬─home_team─────────────┬─away_team───────────┬─home_team_goals─┬─away_team_goals─┐
 1. │ 2022-04-30 │   2021 │ Sutton United         │ Bradford City       │               1 │               4 │
 2. │ 2022-04-30 │   2021 │ Swindon Town          │ Barrow              │               2 │               1 │
 3. │ 2022-04-30 │   2021 │ Tranmere Rovers       │ Oldham Athletic     │               2 │               0 │
 4. │ 2022-05-02 │   2021 │ Port Vale             │ Newport County      │               1 │               2 │
 5. │ 2022-05-02 │   2021 │ Salford City          │ Mansfield Town      │               2 │               2 │
 6. │ 2022-05-07 │   2021 │ Barrow                │ Northampton Town    │               1 │               3 │
 7. │ 2022-05-07 │   2021 │ Bradford City         │ Carlisle United     │               2 │               0 │
 8. │ 2022-05-07 │   2021 │ Bristol Rovers        │ Scunthorpe United   │               7 │               0 │
 9. │ 2022-05-07 │   2021 │ Exeter City           │ Port Vale           │               0 │               1 │
10. │ 2022-05-07 │   2021 │ Harrogate Town A.F.C. │ Sutton United       │               0 │               2 │
11. │ 2022-05-07 │   2021 │ Hartlepool United     │ Colchester United   │               0 │               2 │
12. │ 2022-05-07 │   2021 │ Leyton Orient         │ Tranmere Rovers     │               0 │               1 │
13. │ 2022-05-07 │   2021 │ Mansfield Town        │ Forest Green Rovers │               2 │               2 │
14. │ 2022-05-07 │   2021 │ Newport County        │ Rochdale            │               0 │               2 │
15. │ 2022-05-07 │   2021 │ Oldham Athletic       │ Crawley Town        │               3 │               3 │
16. │ 2022-05-07 │   2021 │ Stevenage Borough     │ Salford City        │               4 │               2 │
17. │ 2022-05-07 │   2021 │ Walsall               │ Swindon Town        │               0 │               3 │
    └────────────┴────────┴───────────────────────┴─────────────────────┴─────────────────┴─────────────────┘
```
Insert the data:
sql
INSERT INTO football FROM INFILE 'football.parquet' FORMAT Parquet;
Reading data {#reading-data}
Read data using the
Parquet
format:
sql
SELECT *
FROM football
INTO OUTFILE 'football.parquet'
FORMAT Parquet
:::tip
Parquet is a binary format that does not display in a human-readable form on the terminal. Use the `INTO OUTFILE` clause to output Parquet files.
:::
To exchange data with Hadoop, you can use the
HDFS table engine
.
Format settings {#format-settings} | {"source_file": "Parquet.md"} |
6babf3bc-bf62-4889-a9e5-e94754e85615 | | Setting | Description | Default |
|--------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------|
|
input_format_parquet_case_insensitive_column_matching
| Ignore case when matching Parquet columns with CH columns. |
0
|
|
input_format_parquet_preserve_order
| Avoid reordering rows when reading from Parquet files. Usually makes it much slower. |
0
|
|
input_format_parquet_filter_push_down
| When reading Parquet files, skip whole row groups based on the WHERE/PREWHERE expressions and min/max statistics in the Parquet metadata. |
1
|
|
input_format_parquet_bloom_filter_push_down
| When reading Parquet files, skip whole row groups based on the WHERE expressions and bloom filter in the Parquet metadata. |
0
|
|
input_format_parquet_use_native_reader
| When reading Parquet files, to use native reader instead of arrow reader. |
0
|
|
input_format_parquet_allow_missing_columns
| Allow missing columns while reading Parquet input formats |
1
|
|
input_format_parquet_local_file_min_bytes_for_seek
| Min bytes required for local read (file) to do seek, instead of read with ignore in Parquet input format |
8192
|
|
input_format_parquet_enable_row_group_prefetch | {"source_file": "Parquet.md"} | [
0.027272013947367668,
0.04274405911564827,
-0.006601972505450249,
-0.007452395744621754,
-0.060984302312135696,
0.0718095675110817,
0.012279381044209003,
0.047188855707645416,
0.028277188539505005,
-0.05718793720006943,
0.021592915058135986,
-0.04840480908751488,
-0.0257643423974514,
-0.05... |
f56b22b5-c9b6-4163-8f49-912e0382b056 | 8192
|
|
input_format_parquet_enable_row_group_prefetch
| Enable row group prefetching during parquet parsing. Currently, only single-threaded parsing can prefetch. |
1
|
|
input_format_parquet_skip_columns_with_unsupported_types_in_schema_inference
| Skip columns with unsupported types while schema inference for format Parquet |
0
|
|
input_format_parquet_max_block_size
| Max block size for parquet reader. |
65409
|
|
input_format_parquet_prefer_block_bytes
| Average block bytes output by parquet reader |
16744704
|
|
input_format_parquet_enable_json_parsing
| When reading Parquet files, parse JSON columns as ClickHouse JSON Column. |
1
|
|
output_format_parquet_row_group_size
| Target row group size in rows. |
1000000
|
|
output_format_parquet_row_group_size_bytes
| Target row group size in bytes, before compression. |
536870912
|
|
output_format_parquet_string_as_string
| Use Parquet String type instead of Binary for String columns. |
1
|
|
output_format_parquet_fixed_string_as_fixed_byte_array
| Use Parquet FIXED_LEN_BYTE_ARRAY type instead of Binary for FixedString columns. |
1
|
| | {"source_file": "Parquet.md"} | [
-0.02660740725696087,
-0.04386972635984421,
-0.06866233050823212,
-0.07068588584661484,
-0.07296989858150482,
-0.06111002713441849,
-0.08316027373075485,
0.02008645236492157,
-0.0714053064584732,
0.019700461998581886,
-0.0302182137966156,
-0.02175631746649742,
-0.03323918208479881,
-0.0335... |
dac0f653-af7e-4a77-a8f3-1ecb8079cb5e | 1
|
|
output_format_parquet_version
| Parquet format version for output format. Supported versions: 1.0, 2.4, 2.6 and 2.latest (default) |
2.latest
|
|
output_format_parquet_compression_method
| Compression method for Parquet output format. Supported codecs: snappy, lz4, brotli, zstd, gzip, none (uncompressed) |
zstd
|
|
output_format_parquet_compliant_nested_types
| In parquet file schema, use name 'element' instead of 'item' for list elements. This is a historical artifact of Arrow library implementation. Generally increases compatibility, except perhaps with some old versions of Arrow. |
1
|
|
output_format_parquet_use_custom_encoder
| Use a faster Parquet encoder implementation. |
1
|
|
output_format_parquet_parallel_encoding
| Do Parquet encoding in multiple threads. Requires output_format_parquet_use_custom_encoder. |
1
|
|
output_format_parquet_data_page_size
| Target page size in bytes, before compression. |
1048576
|
|
output_format_parquet_batch_size
| Check page size every this many rows. Consider decreasing if you have columns with average values size above a few KBs. |
1024
|
|
output_format_parquet_write_page_index
| Add a possibility to write page index into parquet files. |
1
|
|
input_format_parquet_import_nested
| Obsolete setting, does nothing. |
0
|
|
input_format_parquet_local_time_as_utc | {"source_file": "Parquet.md"} | [
-0.04039021581411362,
-0.02334483340382576,
-0.029711226001381874,
-0.067762590944767,
-0.03043161891400814,
-0.04262847080826759,
-0.015433412976562977,
0.038128774613142014,
-0.06209950894117355,
-0.029502328485250473,
-0.010396470315754414,
0.04413227364420891,
-0.019764095544815063,
-0... |
9904a512-b26f-4224-ae42-2246c25478c7 | 0
|
|
input_format_parquet_local_time_as_utc
| true | Determines the data type used by schema inference for Parquet timestamps with isAdjustedToUTC=false. If true: DateTime64(..., 'UTC'), if false: DateTime64(...). Neither behavior is fully correct as ClickHouse doesn't have a data type for local wall-clock time. Counterintuitively, 'true' is probably the less incorrect option, because formatting the 'UTC' timestamp as String will produce representation of the correct local time. | | {"source_file": "Parquet.md"} | [
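The `*_filter_push_down` settings above work off the per-row-group min/max statistics stored in the Parquet footer. A minimal sketch of the pruning idea, with hypothetical row groups and an illustrative helper (not ClickHouse internals):

```python
# Hypothetical row-group statistics, as they might appear in a Parquet footer.
row_groups = [
    {"id": 0, "min": 0,    "max": 999},
    {"id": 1, "min": 1000, "max": 1999},
    {"id": 2, "min": 2000, "max": 2999},
]

def groups_to_read(groups, lower_bound):
    """Keep only row groups whose [min, max] range can satisfy `value > lower_bound`."""
    return [g["id"] for g in groups if g["max"] > lower_bound]

# For a predicate like `WHERE value > 1500`, only groups 1 and 2 need scanning.
print(groups_to_read(row_groups, 1500))  # [1, 2]
```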
description: 'Documentation for the ParquetMetadata format'
keywords: ['ParquetMetadata']
slug: /interfaces/formats/ParquetMetadata
title: 'ParquetMetadata'
doc_type: 'reference'
Description {#description}
Special format for reading Parquet file metadata (https://parquet.apache.org/docs/file-format/metadata/). It always outputs one row with the following structure:
- `num_columns` - the number of columns
- `num_rows` - the total number of rows
- `num_row_groups` - the total number of row groups
- `format_version` - parquet format version, always 1.0 or 2.6
- `total_uncompressed_size` - total uncompressed bytes size of the data, calculated as the sum of `total_byte_size` from all row groups
- `total_compressed_size` - total compressed bytes size of the data, calculated as the sum of `total_compressed_size` from all row groups
- `columns` - the list of columns metadata with the following structure:
    - `name` - column name
    - `path` - column path (differs from name for nested columns)
    - `max_definition_level` - maximum definition level
    - `max_repetition_level` - maximum repetition level
    - `physical_type` - column physical type
    - `logical_type` - column logical type
    - `compression` - compression used for this column
    - `total_uncompressed_size` - total uncompressed bytes size of the column, calculated as the sum of `total_uncompressed_size` of the column from all row groups
    - `total_compressed_size` - total compressed bytes size of the column, calculated as the sum of `total_compressed_size` of the column from all row groups
    - `space_saved` - percent of space saved by compression, calculated as `(1 - total_compressed_size/total_uncompressed_size)`
    - `encodings` - the list of encodings used for this column
- `row_groups` - the list of row groups metadata with the following structure:
    - `num_columns` - the number of columns in the row group
    - `num_rows` - the number of rows in the row group
    - `total_uncompressed_size` - total uncompressed bytes size of the row group
    - `total_compressed_size` - total compressed bytes size of the row group
    - `columns` - the list of column chunks metadata with the following structure:
        - `name` - column name
        - `path` - column path
        - `total_compressed_size` - total compressed bytes size of the column
        - `total_uncompressed_size` - total uncompressed bytes size of the column
        - `have_statistics` - boolean flag that indicates whether the column chunk metadata contains column statistics
        - `statistics` - column chunk statistics (all fields are NULL if `have_statistics = false`) with the following structure:
            - `num_values` - the number of non-null values in the column chunk
            - `null_count` - the number of NULL values in the column chunk
            - `distinct_count` - the number of distinct values in the column chunk
            - `min` - the minimum value of the column chunk
            - `max` - the maximum value of the column chunk
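The `space_saved` field is derived purely from the two size fields. A minimal sketch of the calculation, using the `number` column sizes from the example output on this page:

```python
def space_saved(total_compressed_size: int, total_uncompressed_size: int) -> str:
    """Percent of space saved: (1 - total_compressed_size/total_uncompressed_size)."""
    ratio = 1 - total_compressed_size / total_uncompressed_size
    return f"{ratio * 100:.2f}%"

# Sizes of the 'number' column from the example output: 13293 compressed, 133321 uncompressed.
print(space_saved(13293, 133321))  # 90.03%
```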
Example usage {#example-usage}
Example:
```sql
SELECT *
FROM file(data.parquet, ParquetMetadata)
FORMAT PrettyJSONEachRow
```
```json
{
    "num_columns": "2",
    "num_rows": "100000",
    "num_row_groups": "2",
    "format_version": "2.6",
    "metadata_size": "577",
    "total_uncompressed_size": "282436",
    "total_compressed_size": "26633",
    "columns": [
        {
            "name": "number",
            "path": "number",
            "max_definition_level": "0",
            "max_repetition_level": "0",
            "physical_type": "INT32",
            "logical_type": "Int(bitWidth=16, isSigned=false)",
            "compression": "LZ4",
            "total_uncompressed_size": "133321",
            "total_compressed_size": "13293",
            "space_saved": "90.03%",
            "encodings": [
                "RLE_DICTIONARY",
                "PLAIN",
                "RLE"
            ]
        },
        {
            "name": "concat('Hello', toString(modulo(number, 1000)))",
            "path": "concat('Hello', toString(modulo(number, 1000)))",
            "max_definition_level": "0",
            "max_repetition_level": "0",
            "physical_type": "BYTE_ARRAY",
            "logical_type": "None",
            "compression": "LZ4",
            "total_uncompressed_size": "149115",
            "total_compressed_size": "13340",
            "space_saved": "91.05%",
            "encodings": [
                "RLE_DICTIONARY",
                "PLAIN",
                "RLE"
            ]
        }
    ],
    "row_groups": [
        {
            "num_columns": "2",
            "num_rows": "65409",
            "total_uncompressed_size": "179809",
            "total_compressed_size": "14163",
            "columns": [
                {
                    "name": "number",
                    "path": "number",
                    "total_compressed_size": "7070",
                    "total_uncompressed_size": "85956",
                    "have_statistics": true,
                    "statistics": {
                        "num_values": "65409",
                        "null_count": "0",
                        "distinct_count": null,
                        "min": "0",
                        "max": "999"
                    }
                },
                {
                    "name": "concat('Hello', toString(modulo(number, 1000)))",
                    "path": "concat('Hello', toString(modulo(number, 1000)))",
                    "total_compressed_size": "7093",
                    "total_uncompressed_size": "93853",
                    "have_statistics": true,
                    "statistics": {
                        "num_values": "65409",
                        "null_count": "0",
                        "distinct_count": null,
                        "min": "Hello0",
                        "max": "Hello999"
                    }
                }
            ]
        },
        ...
    ]
}
```
alias: []
description: 'Documentation for the ProtobufSingle format'
input_format: true
keywords: ['ProtobufSingle']
output_format: true
slug: /interfaces/formats/ProtobufSingle
title: 'ProtobufSingle'
doc_type: 'reference'
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
| Input | Output | Alias |
|-------|--------|-------|
| ✔     | ✔      |       |
Description {#description}
The
ProtobufSingle
format is the same as the
Protobuf
format but it is intended for storing/parsing single Protobuf messages without length delimiters.
Example usage {#example-usage}
Format settings {#format-settings}
alias: []
description: 'Documentation for the Protobuf format'
input_format: true
keywords: ['Protobuf']
output_format: true
slug: /interfaces/formats/Protobuf
title: 'Protobuf'
doc_type: 'guide'
| Input | Output | Alias |
|-------|--------|-------|
| ✔     | ✔      |       |
Description {#description}
The
Protobuf
format is the
Protocol Buffers
format.
This format requires an external format schema, which is cached between queries.
ClickHouse supports:
- both proto2 and proto3 syntaxes.
- repeated / optional / required fields.
To find the correspondence between table columns and fields of the Protocol Buffers' message type, ClickHouse compares their names.
This comparison is case-insensitive and the characters
_
(underscore) and
.
(dot) are considered equal.
If the types of a column and a field of the Protocol Buffers' message are different, then the necessary conversion is applied.
Nested messages are supported. For example, for the field
z
in the following message type:
```protobuf
message MessageType {
  message XType {
    message YType {
      int32 z;
    };
    repeated YType y;
  };
  XType x;
};
```
ClickHouse tries to find a column named
x.y.z
(or
x_y_z
or
X.y_Z
and so on).
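The matching rule above (case-insensitive, with `_` and `.` treated as equal) can be sketched as a small normalization step; this is an illustration of the rule, not ClickHouse's actual implementation:

```python
def normalize(name: str) -> str:
    """Lower-case the name and treat '.' (dot) the same as '_' (underscore)."""
    return name.lower().replace(".", "_")

def names_match(column_name: str, field_path: str) -> bool:
    return normalize(column_name) == normalize(field_path)

# All of these spellings resolve to the same nested field:
print(names_match("x.y.z", "x_y_z"))  # True
print(names_match("X.y_Z", "x.y.z"))  # True
```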
Nested messages are suitable for input or output of a
nested data structures
.
Default values defined in a protobuf schema like the one that follows are not applied, rather the
table defaults
are used instead of them:
```protobuf
syntax = "proto2";
message MessageType {
optional int32 result_per_page = 3 [default = 10];
}
```
If a message contains
oneof
and
input_format_protobuf_oneof_presence
is set, ClickHouse fills the column that indicates which field of the oneof was found.
```protobuf
syntax = "proto3";
message StringOrString {
oneof string_oneof {
string string1 = 1;
string string2 = 42;
}
}
```
```sql
CREATE TABLE string_or_string (string1 String, string2 String, string_oneof Enum('no' = 0, 'hello' = 1, 'world' = 42)) ENGINE = MergeTree ORDER BY tuple();
INSERT INTO string_or_string FROM INFILE '$CURDIR/data_protobuf/String1' SETTINGS format_schema='$SCHEMADIR/string_or_string.proto:StringOrString' FORMAT ProtobufSingle;
SELECT * FROM string_or_string
```
```text
   ┌─────────┬─────────┬──────────────┐
   │ string1 │ string2 │ string_oneof │
   ├─────────┼─────────┼──────────────┤
1. │         │ string2 │ world        │
   ├─────────┼─────────┼──────────────┤
2. │ string1 │         │ hello        │
   └─────────┴─────────┴──────────────┘
```
The name of the column that indicates presence must be the same as the name of the oneof. Nested messages are supported (see
basic-examples
).
Allowed types for the presence column are Int8, UInt8, Int16, UInt16, Int32, UInt32, Int64, UInt64, Enum, Enum8 or Enum16.
The Enum (as well as Enum8 or Enum16) must contain all of the oneof's possible tags plus 0 to indicate absence; the string representations do not matter.
The setting
input_format_protobuf_oneof_presence
is disabled by default.
ClickHouse inputs and outputs protobuf messages in the
length-delimited
format.
This means that before every message its length should be written as a
variable width integer (varint)
.
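The length prefix is a standard protobuf base-128 varint: each byte carries seven bits of the length, and the high bit signals that more bytes follow. A minimal encoder sketch (the full Python example later in this section uses the protobuf library's `_VarintBytes` helper instead):

```python
def encode_varint(value: int) -> bytes:
    """Encode a non-negative integer as a protobuf base-128 varint."""
    out = bytearray()
    while True:
        byte = value & 0x7F          # low seven bits
        value >>= 7
        if value:
            out.append(byte | 0x80)  # high bit set: more bytes follow
        else:
            out.append(byte)         # final byte: high bit clear
            return bytes(out)

# A 300-byte message is framed with the two prefix bytes 0xAC 0x02.
print(encode_varint(300).hex())  # ac02
```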
Example usage {#example-usage}
Reading and writing data {#basic-examples}
:::note Example files
The files used in this example are available in the
examples repository
:::
In this example we will read some data from a file
protobuf_messages.bin
into a ClickHouse table. We'll then write it
back out to a file called
protobuf_message_from_clickhouse.bin
using the
Protobuf
format.
Given the file
schemafile.proto
:
```protobuf
syntax = "proto3";
message MessageType {
string name = 1;
string surname = 2;
uint32 birthDate = 3;
repeated string phoneNumbers = 4;
};
```
Generating the binary file
If you already know how to serialize and deserialize data in the `Protobuf` format, you can skip this step.
We'll use Python to serialize some data into `protobuf_message.bin` and read it into ClickHouse.
If there is another language you want to use, see also: ["How to read/write length-delimited Protobuf messages in popular languages"](https://cwiki.apache.org/confluence/display/GEODE/Delimiting+Protobuf+Messages).
Run the following command to generate a Python file named `schemafile_pb2.py` in
the same directory as `schemafile.proto`. This file contains the Python classes
that represent your `MessageType` Protobuf message:
```bash
protoc --python_out=. schemafile.proto
```
Now, create a new Python file named `generate_protobuf_data.py`, in the same
directory as `schemafile_pb2.py`. Paste the following code into it:
```python
import schemafile_pb2 # Module generated by 'protoc'
from google.protobuf import text_format
from google.protobuf.internal.encoder import _VarintBytes # Import the internal varint encoder
def create_user_data_message(name, surname, birthDate, phoneNumbers):
"""
    Creates and populates a MessageType Protobuf message.
"""
message = schemafile_pb2.MessageType()
message.name = name
message.surname = surname
message.birthDate = birthDate
message.phoneNumbers.extend(phoneNumbers)
return message
# The data for our example users
data_to_serialize = [
{"name": "Aisha", "surname": "Khan", "birthDate": 19920815, "phoneNumbers": ["(555) 247-8903", "(555) 612-3457"]},
{"name": "Javier", "surname": "Rodriguez", "birthDate": 20001015, "phoneNumbers": ["(555) 891-2046", "(555) 738-5129"]},
{"name": "Mei", "surname": "Ling", "birthDate": 19980616, "phoneNumbers": ["(555) 956-1834", "(555) 403-7682"]},
]
output_filename = "protobuf_messages.bin"
# Open the binary file in write-binary mode ('wb')
with open(output_filename, "wb") as f:
for item in data_to_serialize:
# Create a Protobuf message instance for the current user
message = create_user_data_message(
item["name"],
item["surname"],
item["birthDate"],
item["phoneNumbers"]
)
# Serialize the message
serialized_data = message.SerializeToString()
# Get the length of the serialized data
message_length = len(serialized_data)
# Use the Protobuf library's internal _VarintBytes to encode the length
length_prefix = _VarintBytes(message_length)
# Write the length prefix
f.write(length_prefix)
# Write the serialized message data
f.write(serialized_data)
print(f"Protobuf messages (length-delimited) written to {output_filename}")
# --- Optional: Verification (reading back and printing) ---
# For reading back, we'll also use the internal Protobuf decoder for varints.
from google.protobuf.internal.decoder import _DecodeVarint32
print("\n--- Verifying by reading back ---")
with open(output_filename, "rb") as f:
buf = f.read() # Read the whole file into a buffer for easier varint decoding
n = 0
while n < len(buf):
# Decode the varint length prefix
msg_len, new_pos = _DecodeVarint32(buf, n)
n = new_pos
# Extract the message data
message_data = buf[n:n+msg_len]
n += msg_len
# Parse the message
decoded_message = schemafile_pb2.MessageType()
decoded_message.ParseFromString(message_data)
print(text_format.MessageToString(decoded_message, as_utf8=True))
```
Now run the script from the command line. It is recommended to run it from a
python virtual environment, for example using `uv`:
```bash
uv venv proto-venv
source proto-venv/bin/activate
```
You will need to install the following python libraries:
```bash
uv pip install --upgrade protobuf
```
Run the script to generate the binary file:
```bash
python generate_protobuf_data.py
```
Create a ClickHouse table matching the schema:
```sql
CREATE DATABASE IF NOT EXISTS test;
CREATE TABLE IF NOT EXISTS test.protobuf_messages (
    name String,
    surname String,
    birthDate UInt32,
    phoneNumbers Array(String)
)
ENGINE = MergeTree()
ORDER BY tuple()
```
Insert the data into the table from the command line:
```bash
cat protobuf_messages.bin | clickhouse-client --query "INSERT INTO test.protobuf_messages SETTINGS format_schema='schemafile:MessageType' FORMAT Protobuf"
```
You can also write the data back to a binary file using the
Protobuf
format:
```sql
SELECT * FROM test.protobuf_messages INTO OUTFILE 'protobuf_message_from_clickhouse.bin' FORMAT Protobuf SETTINGS format_schema = 'schemafile:MessageType'
```
With your Protobuf schema, you can now deserialize the data which was written out from ClickHouse to file
protobuf_message_from_clickhouse.bin
.
Reading and writing data using ClickHouse Cloud {#basic-examples-cloud}
With ClickHouse Cloud you are not able to upload a Protobuf schema file. However, you can use the
format_schema_source
setting to specify the schema in the query. In this example, we show you how to read serialized data from your local
machine and insert it into a table in ClickHouse Cloud.
As in the previous example, create the table according to the schema of your Protobuf schema in ClickHouse Cloud:
```sql
CREATE DATABASE IF NOT EXISTS testing;
CREATE TABLE IF NOT EXISTS testing.protobuf_messages (
    name String,
    surname String,
    birthDate UInt32,
    phoneNumbers Array(String)
)
ENGINE = MergeTree()
ORDER BY tuple()
```
The setting
format_schema_source
defines the source of the setting
format_schema
. Possible values:
- 'file' (default): unsupported in Cloud
- 'string': The
format_schema
is the literal content of the schema.
- 'query': The
format_schema
is a query to retrieve the schema.
format_schema_source='string'
{#format-schema-source-string}
To insert the data into ClickHouse Cloud, specifying the schema as a string, run:
```bash
cat protobuf_messages.bin | clickhouse client --host <hostname> --secure --password <password> --query "INSERT INTO testing.protobuf_messages SETTINGS format_schema_source='syntax = \"proto3\";message MessageType { string name = 1; string surname = 2; uint32 birthDate = 3; repeated string phoneNumbers = 4;};', format_schema='schemafile:MessageType' FORMAT Protobuf"
```
Select the data inserted into the table:
```bash
clickhouse client --host <hostname> --secure --password <password> --query "SELECT * FROM testing.protobuf_messages"
```
```response
Aisha   Khan        19920815    ['(555) 247-8903','(555) 612-3457']
Javier  Rodriguez   20001015    ['(555) 891-2046','(555) 738-5129']
Mei     Ling        19980616    ['(555) 956-1834','(555) 403-7682']
```
format_schema_source='query'
{#format-schema-source-query}
You can also store your Protobuf schema in a table.
Create a table on ClickHouse Cloud to insert data into:
```sql
CREATE TABLE testing.protobuf_schema (
    schema String
)
ENGINE = MergeTree()
ORDER BY tuple();
```
```sql
INSERT INTO testing.protobuf_schema VALUES ('syntax = "proto3";message MessageType { string name = 1; string surname = 2; uint32 birthDate = 3; repeated string phoneNumbers = 4;};');
```
Insert the data into ClickHouse Cloud, specifying the schema as a query to run:
```bash
cat protobuf_messages.bin | clickhouse client --host <hostname> --secure --password <password> --query "INSERT INTO testing.protobuf_messages SETTINGS format_schema_source='SELECT schema FROM testing.protobuf_schema', format_schema='schemafile:MessageType' FORMAT Protobuf"
```
Select the data inserted into the table:
```bash
clickhouse client --host <hostname> --secure --password <password> --query "SELECT * FROM testing.protobuf_messages"
```
```response
Aisha   Khan        19920815    ['(555) 247-8903','(555) 612-3457']
Javier  Rodriguez   20001015    ['(555) 891-2046','(555) 738-5129']
Mei     Ling        19980616    ['(555) 956-1834','(555) 403-7682']
```
Using autogenerated schema {#using-autogenerated-protobuf-schema}
If you don't have an external Protobuf schema for your data, you can still output/input data in the Protobuf format
using an autogenerated schema. To do this, use the
format_protobuf_use_autogenerated_schema
setting.
For example:
```sql
SELECT * FROM test.hits FORMAT Protobuf SETTINGS format_protobuf_use_autogenerated_schema = 1
```
In this case, ClickHouse will autogenerate the Protobuf schema according to the table structure using function
structureToProtobufSchema
. It will then use this schema to serialize data in the Protobuf format.
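To illustrate how a table structure maps to a schema, here is a loose sketch of the idea behind `structureToProtobufSchema`; the type mapping and rendered output below are simplified assumptions, not ClickHouse's exact generated schema:

```python
# Simplified, assumed mapping from ClickHouse types to protobuf types.
TYPE_MAP = {
    "String": "string",
    "UInt32": "uint32",
    "Int32": "int32",
    "Array(String)": "repeated string",
}

def to_proto(message_name, columns):
    """Render a .proto message from (column_name, clickhouse_type) pairs."""
    lines = [f"message {message_name} {{"]
    for i, (name, ch_type) in enumerate(columns, start=1):
        lines.append(f"    {TYPE_MAP[ch_type]} {name} = {i};")
    lines.append("}")
    return "\n".join(lines)

print(to_proto("Message", [("name", "String"), ("birthDate", "UInt32")]))
```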
You can also read a Protobuf file with the autogenerated schema. In this case it is necessary for the file to be created using the same schema:
```bash
cat hits.bin | clickhouse-client --query "INSERT INTO test.hits SETTINGS format_protobuf_use_autogenerated_schema=1 FORMAT Protobuf"
```
The setting
format_protobuf_use_autogenerated_schema
is enabled by default and applies if
format_schema
is not set.
You can also save the autogenerated schema to a file during input/output using the setting
output_format_schema
. For example:
```sql
SELECT * FROM test.hits FORMAT Protobuf SETTINGS format_protobuf_use_autogenerated_schema = 1, output_format_schema = 'path/to/schema/schema.proto'
```
In this case the autogenerated Protobuf schema will be saved in the file
path/to/schema/schema.proto
.
Drop protobuf cache {#drop-protobuf-cache}
To reload the Protobuf schema loaded from
format_schema_path
use the
SYSTEM DROP ... FORMAT CACHE
statement.
```sql
SYSTEM DROP FORMAT SCHEMA CACHE FOR Protobuf
```
alias: []
description: 'Documentation for the ProtobufList format'
input_format: true
keywords: ['ProtobufList']
output_format: true
slug: /interfaces/formats/ProtobufList
title: 'ProtobufList'
doc_type: 'reference'
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
| Input | Output | Alias |
|-------|--------|-------|
| ✔     | ✔      |       |
Description {#description}
The
ProtobufList
format is similar to the
Protobuf
format but rows are represented as a sequence of sub-messages contained in a message with a fixed name of "Envelope".
Example usage {#example-usage}
For example:
```sql
SELECT * FROM test.table FORMAT ProtobufList SETTINGS format_schema = 'schemafile:MessageType'
```
```bash
cat protobuflist_messages.bin | clickhouse-client --query "INSERT INTO test.table FORMAT ProtobufList SETTINGS format_schema='schemafile:MessageType'"
```
Where the file
schemafile.proto
looks like this:
```protobuf title="schemafile.proto"
syntax = "proto3";
message Envelope {
    message MessageType {
        string name = 1;
        string surname = 2;
        uint32 birthDate = 3;
        repeated string phoneNumbers = 4;
    };
    MessageType row = 1;
};
```
Format settings {#format-settings}