Now Grafana will see the results of the alias table instead of the results from `DESC example_table`:

Both types of aliasing can be used to perform complex type conversions or JSON field extraction.
## All YAML options {#all-yaml-options}

These are all of the YAML configuration options made available by the plugin.
Some fields have example values while others simply show the field's type.
See the Grafana documentation for more information on provisioning data sources with YAML.
```yaml
datasources:
  - name: Example ClickHouse
    uid: clickhouse-example
    type: grafana-clickhouse-datasource
    jsonData:
      host: 127.0.0.1
      port: 9000
      protocol: native
      secure: false
      username: default
      tlsSkipVerify: <boolean>
      tlsAuth: <boolean>
      tlsAuthWithCACert: <boolean>
      defaultDatabase: default
      defaultTable: <string>
      dialTimeout: 10
      queryTimeout: 60
      validateSql: false
      httpHeaders:
        - name: X-Example-Plain-Header
          value: plain text value
          secure: false
        - name: X-Example-Secure-Header
          secure: true
      logs:
        defaultDatabase: default
        defaultTable: otel_logs
        otelEnabled: false
        otelVersion: latest
        timeColumn: <string>
        levelColumn: <string>
        messageColumn: <string>
      traces:
        defaultDatabase: default
        defaultTable: otel_traces
        otelEnabled: false
        otelVersion: latest
        traceIdColumn: <string>
        spanIdColumn: <string>
        operationNameColumn: <string>
        parentSpanIdColumn: <string>
        serviceNameColumn: <string>
        durationTimeColumn: <string>
        durationUnitColumn: <time unit>
        startTimeColumn: <string>
        tagsColumn: <string>
        serviceTagsColumn: <string>
    secureJsonData:
      tlsCACert: <string>
      tlsClientCert: <string>
      tlsClientKey: <string>
      secureHttpHeaders.X-Example-Secure-Header: secure header value
```
---
sidebar_label: 'Query Builder'
sidebar_position: 2
slug: /integrations/grafana/query-builder
description: 'Using the Query Builder in the ClickHouse Grafana plugin'
title: 'Query Builder'
doc_type: 'guide'
keywords: ['grafana', 'query builder', 'visualization', 'dashboards', 'plugin']
---
import Image from '@theme/IdealImage';
import demo_table_query from '@site/static/images/integrations/data-visualization/grafana/demo_table_query.png';
import demo_logs_query from '@site/static/images/integrations/data-visualization/grafana/demo_logs_query.png';
import demo_logs_query_fields from '@site/static/images/integrations/data-visualization/grafana/demo_logs_query_fields.png';
import demo_time_series_query from '@site/static/images/integrations/data-visualization/grafana/demo_time_series_query.png';
import demo_trace_query from '@site/static/images/integrations/data-visualization/grafana/demo_trace_query.png';
import demo_raw_sql_query from '@site/static/images/integrations/data-visualization/grafana/demo_raw_sql_query.png';
import trace_id_in_table from '@site/static/images/integrations/data-visualization/grafana/trace_id_in_table.png';
import trace_id_in_logs from '@site/static/images/integrations/data-visualization/grafana/trace_id_in_logs.png';
import demo_data_links from '@site/static/images/integrations/data-visualization/grafana/demo_data_links.png';
import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported';
# Query Builder

Any query can be run with the ClickHouse plugin.
The query builder is a convenient option for simpler queries, but for complicated queries you will need to use the SQL Editor.

All queries in the query builder have a query type, and require at least one column to be selected.
The available query types are:

- Table: the simplest query type for showing data in table format. Works well as a catch-all for both simple and complex queries containing aggregate functions.
- Logs: optimized for building queries for logs. Works best in explore view with defaults configured.
- Time Series: best used for building time series queries. Allows selecting a dedicated time column and adding aggregate functions.
- Traces: optimized for searching/viewing traces. Works best in explore view with defaults configured.
- SQL Editor: the SQL Editor can be used when you want full control over the query. In this mode, any SQL query can be executed.
## Query types {#query-types}

The Query Type setting will change the layout of the query builder to match the type of query being built.
The query type also determines which panel is used when visualizing data.

### Table {#table}

The most flexible query type is the table query. This is a catch-all for the other query builders, designed to handle both simple and aggregate queries.
| Field | Description |
|----|----|
| Builder Mode | Simple queries exclude Aggregates and Group By, while aggregate queries include these options. |
| Columns | The selected columns. Raw SQL can be typed into this field to allow for functions and column aliasing. |
| Aggregates | A list of aggregate functions. Allows for custom values for function and column. Only visible in Aggregate mode. |
| Group By | A list of `GROUP BY` expressions. Only visible in Aggregate mode. |
| Order By | A list of `ORDER BY` expressions. |
| Limit | Appends a `LIMIT` statement to the end of the query. If set to `0` then it will be excluded. Some visualizations might need this set to `0` to show all the data. |
| Filters | A list of filters to be applied in the `WHERE` clause. |

This query type will render the data as a table.
### Logs {#logs}

The logs query type offers a query builder focused on querying logs data.
Defaults can be configured in the data source's log configuration to allow the query builder to be pre-loaded with a default database/table and columns.
OpenTelemetry can also be enabled to auto-select the columns according to a schema version.

Time and Level filters are added by default, along with an Order By for the Time column.
These filters are tied to their respective fields, and will update as the columns are changed.
The Level filter is excluded from the SQL by default; changing it from the IS ANYTHING option will enable it.

The logs query type supports data links.

| Field | Description |
|----|----|
| Use OTel | Enables OpenTelemetry columns. Will overwrite the selected columns to use the columns defined by the selected OTel schema version (disables column selection). |
| Columns | Extra columns to be added to the log rows. Raw SQL can be typed into this field to allow for functions and column aliasing. |
| Time | The primary timestamp column for the log. Will display time-like types, but allows for custom values/functions. |
| Log Level | Optional. The level or severity of the log. Values typically look like `INFO`, `error`, `Debug`, etc. |
| Message | The log message content. |
| Order By | A list of `ORDER BY` expressions. |
| Limit | Appends a `LIMIT` statement to the end of the query. If set to `0` then it will be excluded, but this isn't recommended for large log datasets. |
| Filters | A list of filters to be applied in the `WHERE` clause. |
| Message Filter | A text input for conveniently filtering logs using a `LIKE '%value%'` condition. Excluded when the input is empty. |

This query type will render the data in the logs panel along with a logs histogram panel at the top.

Extra columns that are selected in the query can be viewed in the expanded log row:
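As a rough illustration of what the builder assembles, a logs query against an OTel-style schema might expand to SQL along these lines (the `otel_logs` table and the `Timestamp`, `SeverityText`, and `Body` column names are assumptions based on the OTel schema, not output captured from the plugin):

```sql
SELECT
    Timestamp    AS timestamp,
    SeverityText AS level,
    Body         AS body
FROM otel_logs
WHERE $__timeFilter(Timestamp)
  AND Body LIKE '%timeout%'  -- contributed by the Message Filter field
ORDER BY Timestamp DESC
LIMIT 1000
```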
### Time series {#time-series}
The time series query type is similar to the table query type, but with a focus on time series data.

The two views are mostly the same, with these notable differences:

- A dedicated Time field.
- In Aggregate mode, a time interval macro is automatically applied along with a Group By for the Time field.
- In Aggregate mode, the "Columns" field is hidden.
- A time range filter and Order By are automatically added for the Time field.

:::important Is your visualization missing data?
In some cases the time series panel will appear to be cut off because the limit defaults to `1000`.
Try removing the `LIMIT` clause by setting it to `0` (if your dataset allows).
:::
| Field | Description |
|----|----|
| Builder Mode | Simple queries exclude Aggregates and Group By, while aggregate queries include these options. |
| Time | The primary time column for the query. Will display time-like types, but allows for custom values/functions. |
| Columns | The selected columns. Raw SQL can be typed into this field to allow for functions and column aliasing. Only visible in Simple mode. |
| Aggregates | A list of aggregate functions. Allows for custom values for function and column. Only visible in Aggregate mode. |
| Group By | A list of `GROUP BY` expressions. Only visible in Aggregate mode. |
| Order By | A list of `ORDER BY` expressions. |
| Limit | Appends a `LIMIT` statement to the end of the query. If set to `0` then it will be excluded; this is recommended for some time series datasets in order to show the full visualization. |
| Filters | A list of filters to be applied in the `WHERE` clause. |

This query type will render the data with the time series panel.
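For instance, in Aggregate mode the builder produces SQL roughly like the following (the `logs` table and `log_time` column are hypothetical; the interval macro and time filter are the pieces the builder adds automatically):

```sql
SELECT
    $__timeInterval(log_time) AS time,
    count(*) AS total_rows
FROM logs
WHERE $__timeFilter(log_time)
GROUP BY time
ORDER BY time ASC
```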
### Traces {#traces}

The trace query type offers a query builder for easily searching and viewing traces.
It is designed for OpenTelemetry data, but columns can be selected to render traces from a different schema.

Defaults can be configured in the data source's trace configuration to allow the query builder to be pre-loaded with a default database/table and columns. If defaults are configured, the column selection will be collapsed by default.
OpenTelemetry can also be enabled to auto-select the columns according to a schema version.

Default filters are added with the intent to show only top-level spans.
An Order By for the Time and Duration Time columns is also included.
These filters are tied to their respective fields, and will update as the columns are changed.
The Service Name filter is excluded from the SQL by default; changing it from the IS ANYTHING option will enable it.

The trace query type supports data links.
| Field | Description |
|----|----|
| Trace Mode | Changes the query from Trace Search to Trace ID lookup. |
| Use OTel | Enables OpenTelemetry columns. Will overwrite the selected columns to use the columns defined by the selected OTel schema version (disables column selection). |
| Trace ID Column | The trace's ID. |
| Span ID Column | The span ID. |
| Parent Span ID Column | The parent span ID. This is usually empty for top-level traces. |
| Service Name Column | The service name. |
| Operation Name Column | The operation name. |
| Start Time Column | The primary time column for the trace span. The time when the span started. |
| Duration Time Column | The duration of the span. By default Grafana expects this to be a float in milliseconds. A conversion is automatically applied via the Duration Unit dropdown. |
| Duration Unit | The unit of time used for the duration. Nanoseconds by default. The selected unit will be converted to a float in milliseconds as required by Grafana. |
| Tags Column | The span tags. Exclude this if not using an OTel-based schema, as it expects a specific Map column type. |
| Service Tags Column | The service tags. Exclude this if not using an OTel-based schema, as it expects a specific Map column type. |
| Order By | A list of `ORDER BY` expressions. |
| Limit | Appends a `LIMIT` statement to the end of the query. If set to `0` then it will be excluded, but this isn't recommended for large trace datasets. |
| Filters | A list of filters to be applied in the `WHERE` clause. |
| Trace ID | The trace ID to filter by. Only used in Trace ID mode, and when opening a trace ID data link. |

This query type will render the data with the table view for Trace Search mode, and the trace panel for Trace ID mode.
### SQL editor {#sql-editor}

For queries that are too complex for the query builder, you can use the SQL Editor.
This gives you full control over the query by allowing you to write and run plain ClickHouse SQL.
The SQL editor can be opened by selecting "SQL Editor" at the top of the query editor.
Macro functions can still be used in this mode.

You can switch between query types to get a visualization that best fits your query.
This switch also has an effect in dashboard view, notably with time series data.
## Data links {#data-links}

Grafana data links can be used to link to new queries.
This feature has been enabled within the ClickHouse plugin for linking a trace to logs and vice versa. It works best with OpenTelemetry configured for both logs and traces in the data source's config.

Example of trace links in a table

Example of trace links in logs

### How to make a data link {#how-to-make-a-data-link}

You can make a data link by selecting a column named traceID in your query. This name is case insensitive, and supports adding an underscore before "ID". For example: `traceId`, `TraceId`, `TRACE_ID`, and `tracE_iD` would all be valid.
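As a sketch, aliasing any ID column to a matching name is enough to activate the link (here the `otel_logs` table and `TraceId` column are assumptions based on an OTel-style schema):

```sql
SELECT Timestamp, Body, TraceId AS traceID
FROM otel_logs
WHERE $__timeFilter(Timestamp)
```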
If OpenTelemetry is enabled in a log or trace query, a trace ID column will be included automatically.
By including a trace ID column, the "View Trace" and "View Logs" links will be attached to the data.

### Linking abilities {#linking-abilities}

With the data links present, you can open traces and logs using the provided trace ID.
"View Trace" will open a split panel with the trace, and "View Logs" will open a logs query filtered by the trace ID.
If the link is clicked from a dashboard instead of the explore view, the link will be opened in a new tab in the explore view.

Having defaults configured for both logs and traces is required when crossing query types (logs to traces and traces to logs). Defaults are not required when opening a link of the same query type since the query can simply be copied.

Example of viewing a trace (right panel) from a logs query (left panel)
## Macros {#macros}

Macros are a simple way to add dynamic SQL to your query.
Before a query is sent to the ClickHouse server, the plugin expands the macro and replaces it with the full expression.
Queries from both the SQL Editor and Query Builder can use macros.

### Using macros {#using-macros}

Macros can be included anywhere in the query, multiple times if needed.
Here is an example of using the `$__timeFilter` macro:

Input:
```sql
SELECT log_time, log_message
FROM logs
WHERE $__timeFilter(log_time)
```

Final query output:
```sql
SELECT log_time, log_message
FROM logs
WHERE log_time >= toDateTime(1415792726) AND log_time <= toDateTime(1447328726)
```

In this example, the Grafana dashboard's time range is applied to the `log_time` column.

The plugin also supports notation using braces `{}`. Use this notation when queries are needed inside parameters.

### List of macros {#list-of-macros}

This is a list of all macros available in the plugin:
| Macro | Description | Output example |
|----|----|----|
| `$__dateFilter(columnName)` | Replaced by a time range filter on the provided column using the Grafana panel's time range as a `Date`. | `columnName >= toDate('2022-10-21') AND columnName <= toDate('2022-10-23')` |
| `$__timeFilter(columnName)` | Replaced by a time range filter on the provided column using the Grafana panel's time range as a `DateTime`. | `columnName >= toDateTime(1415792726) AND columnName <= toDateTime(1447328726)` |
| `$__timeFilter_ms(columnName)` | Replaced by a time range filter on the provided column using the Grafana panel's time range as a `DateTime64`. | `columnName >= fromUnixTimestamp64Milli(1415792726123) AND columnName <= fromUnixTimestamp64Milli(1447328726456)` |
| `$__dateTimeFilter(dateColumn, timeColumn)` | Shorthand that combines `$__dateFilter()` and `$__timeFilter()` using separate Date and DateTime columns. Alias: `$__dt()`. | `$__dateFilter(dateColumn) AND $__timeFilter(timeColumn)` |
| `$__fromTime` | Replaced by the starting time of the Grafana panel range cast to a `DateTime`. | `toDateTime(1415792726)` |
| `$__fromTime_ms` | Replaced by the starting time of the panel range cast to a `DateTime64`. | `fromUnixTimestamp64Milli(1415792726123)` |
| `$__toTime` | Replaced by the ending time of the Grafana panel range cast to a `DateTime`. | `toDateTime(1447328726)` |
| `$__toTime_ms` | Replaced by the ending time of the panel range cast to a `DateTime64`. | `fromUnixTimestamp64Milli(1447328726456)` |
| `$__timeInterval(columnName)` | Replaced by a function calculating the interval based on window size in seconds. | `toStartOfInterval(toDateTime(columnName), INTERVAL 20 second)` |
| `$__timeInterval_ms(columnName)` | Replaced by a function calculating the interval based on window size in milliseconds. | `toStartOfInterval(toDateTime64(columnName, 3), INTERVAL 20 millisecond)` |
| `$__interval_s` | Replaced by the dashboard interval in seconds. | `20` |
| `$__conditionalAll(condition, $templateVar)` | Replaced by the first parameter when the template variable in the second parameter does not select every value. Replaced by `1=1` when the template variable selects every value. | `condition` or `1=1` |
---
sidebar_label: 'Quick Start'
sidebar_position: 1
slug: /integrations/grafana
description: 'Introduction to using ClickHouse with Grafana'
title: 'ClickHouse data source plugin for Grafana'
show_related_blogs: true
doc_type: 'guide'
integration:
  - support_level: 'partner'
  - category: 'data_visualization'
  - website: 'https://grafana.com/grafana/plugins/grafana-clickhouse-datasource/'
keywords: ['Grafana', 'data visualization', 'dashboard', 'plugin', 'data source']
---
import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_native.md';
import search from '@site/static/images/integrations/data-visualization/grafana/search.png';
import install from '@site/static/images/integrations/data-visualization/grafana/install.png';
import add_new_ds from '@site/static/images/integrations/data-visualization/grafana/add_new_ds.png';
import quick_config from '@site/static/images/integrations/data-visualization/grafana/quick_config.png';
import valid_ds from '@site/static/images/integrations/data-visualization/grafana/valid_ds.png';
import Image from '@theme/IdealImage';
import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported';
# ClickHouse data source plugin for Grafana

With Grafana you can explore and share all of your data through dashboards.
Grafana requires a plugin to connect to ClickHouse, which is easily installed within its UI.

## 1. Gather your connection details {#1-gather-your-connection-details}

## 2. Making a read-only user {#2-making-a-read-only-user}

When connecting ClickHouse to a data visualization tool like Grafana, it is recommended to make a read-only user to protect your data from unwanted modifications.
Grafana does not validate that queries are safe. Queries can contain any SQL statement, including `DELETE` and `INSERT`.

To configure a read-only user, follow these steps:

1. Create a `readonly` user profile following the Creating Users and Roles in ClickHouse guide.
2. Ensure the `readonly` user has enough permission to modify the `max_execution_time` setting required by the underlying clickhouse-go client.
3. If you're using a public ClickHouse instance, it is not recommended to set `readonly=2` in the `readonly` profile. Instead, leave `readonly=1` and set the constraint type of `max_execution_time` to `changeable_in_readonly` to allow modification of this setting.
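A sketch of the steps above in SQL, assuming a ClickHouse version recent enough to support the `CHANGEABLE_IN_READONLY` constraint in settings profiles (the profile name, user name, and password are placeholders; verify the exact syntax against your server version):

```sql
-- Read-only profile that still allows max_execution_time to be changed per query
CREATE SETTINGS PROFILE readonly_grafana SETTINGS
    readonly = 1,
    max_execution_time = 60 CHANGEABLE_IN_READONLY;

-- Hypothetical user for the Grafana data source
CREATE USER grafana_ro IDENTIFIED BY 'change-me'
SETTINGS PROFILE 'readonly_grafana';
```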
## 3. Install the ClickHouse plugin for Grafana {#3--install-the-clickhouse-plugin-for-grafana}

Before Grafana can connect to ClickHouse, you need to install the appropriate Grafana plugin. Assuming you are logged in to Grafana, follow these steps:

1. From the Connections page in the sidebar, select the Add new connection tab.
2. Search for ClickHouse and click on the signed plugin by Grafana Labs:
3. On the next screen, click the Install button:

## 4. Define a ClickHouse data source {#4-define-a-clickhouse-data-source}
Once the installation is complete, click the Add new data source button. (You can also add a data source from the Data sources tab on the Connections page.)

Either scroll down and find the ClickHouse data source type, or search for it in the search bar of the Add data source page. Select the ClickHouse data source and the following page will appear:

Enter your server settings and credentials. The key settings are:

- **Server host address**: the hostname of your ClickHouse service.
- **Server port**: the port for your ClickHouse service. This will differ depending on the server configuration and protocol.
- **Protocol**: the protocol used to connect to your ClickHouse service.
- **Secure connection**: enable if your server requires a secure connection.
- **Username** and **Password**: enter your ClickHouse user credentials. If you have not configured any users, try `default` for the username. It is recommended to configure a read-only user.

For more settings, check the plugin configuration documentation.

Click the Save & test button to verify that Grafana can connect to your ClickHouse service. If successful, you will see a Data source is working message:

## 5. Next steps {#5-next-steps}

Your data source is now ready to use! Learn more about how to build queries with the query builder.

For more details on configuration, check the plugin configuration documentation.

If you're looking for more information that is not included in these docs, check the plugin repository on GitHub.

## Upgrading plugin versions {#upgrading-plugin-versions}

Starting with v4, configurations and queries can be upgraded as new versions are released.

Configurations and queries from v3 are migrated to v4 as they are opened. While the old configurations and dashboards will load in v4, the migration is not persisted until they are saved again in the new version. If you notice any issues when opening an old configuration/query, discard your changes and report the issue on GitHub.

The plugin cannot downgrade to previous versions if the configuration/query was created with a newer version.
---
sidebar_label: 'Go'
sidebar_position: 1
keywords: ['clickhouse', 'go', 'client', 'golang']
slug: /integrations/go
description: 'The Go clients for ClickHouse allow users to connect to ClickHouse using either the Go standard database/sql interface or an optimized native interface.'
title: 'ClickHouse Go'
doc_type: 'reference'
integration:
  - support_level: 'core'
  - category: 'language_client'
---

import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_native.md';

# ClickHouse Go

## A simple example {#a-simple-example}

Let's Go with a simple example. This will connect to ClickHouse and select from the system database. To get started you will need your connection details.
### Connection details {#connection-details}

### Initialize a module {#initialize-a-module}

```bash
mkdir clickhouse-golang-example
cd clickhouse-golang-example
go mod init clickhouse-golang-example
```

### Copy in some sample code {#copy-in-some-sample-code}

Copy this code into the `clickhouse-golang-example` directory as `main.go`.
```go title=main.go
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"log"

	"github.com/ClickHouse/clickhouse-go/v2"
	"github.com/ClickHouse/clickhouse-go/v2/lib/driver"
)

func main() {
	conn, err := connect()
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	rows, err := conn.Query(ctx, "SELECT name, toString(uuid) as uuid_str FROM system.tables LIMIT 5")
	if err != nil {
		log.Fatal(err)
	}

	for rows.Next() {
		var name, uuid string
		if err := rows.Scan(&name, &uuid); err != nil {
			log.Fatal(err)
		}
		log.Printf("name: %s, uuid: %s", name, uuid)
	}
}

func connect() (driver.Conn, error) {
	var (
		ctx       = context.Background()
		conn, err = clickhouse.Open(&clickhouse.Options{
			Addr: []string{"<CLICKHOUSE_SECURE_NATIVE_HOSTNAME>:9440"},
			Auth: clickhouse.Auth{
				Database: "default",
				Username: "default",
				Password: "<DEFAULT_USER_PASSWORD>",
			},
			ClientInfo: clickhouse.ClientInfo{
				Products: []struct {
					Name    string
					Version string
				}{
					{Name: "an-example-go-client", Version: "0.1"},
				},
			},
			Debugf: func(format string, v ...interface{}) {
				fmt.Printf(format, v...)
			},
			TLS: &tls.Config{
				InsecureSkipVerify: true,
			},
		})
	)
	if err != nil {
		return nil, err
	}

	if err := conn.Ping(ctx); err != nil {
		if exception, ok := err.(*clickhouse.Exception); ok {
			fmt.Printf("Exception [%d] %s \n%s\n", exception.Code, exception.Message, exception.StackTrace)
		}
		return nil, err
	}
	return conn, nil
}
```
### Run go mod tidy {#run-go-mod-tidy}

```bash
go mod tidy
```

### Set your connection details {#set-your-connection-details}

Earlier you looked up your connection details. Set them in `main.go` in the `connect()` function:

```go
func connect() (driver.Conn, error) {
	var (
		ctx       = context.Background()
		conn, err = clickhouse.Open(&clickhouse.Options{
			#highlight-next-line
			Addr: []string{"<CLICKHOUSE_SECURE_NATIVE_HOSTNAME>:9440"},
			Auth: clickhouse.Auth{
				#highlight-start
				Database: "default",
				Username: "default",
				Password: "<DEFAULT_USER_PASSWORD>",
				#highlight-end
			},
```
### Run the example {#run-the-example}

```bash
go run .
```

```response
2023/03/06 14:18:33 name: COLUMNS, uuid: 00000000-0000-0000-0000-000000000000
2023/03/06 14:18:33 name: SCHEMATA, uuid: 00000000-0000-0000-0000-000000000000
2023/03/06 14:18:33 name: TABLES, uuid: 00000000-0000-0000-0000-000000000000
2023/03/06 14:18:33 name: VIEWS, uuid: 00000000-0000-0000-0000-000000000000
2023/03/06 14:18:33 name: hourly_data, uuid: a4e36bd4-1e82-45b3-be77-74a0fe65c52b
```
## Learn more {#learn-more}

The rest of the documentation in this category covers the details of the ClickHouse Go client.

## ClickHouse Go client {#clickhouse-go-client}

ClickHouse supports two official Go clients. These clients are complementary and intentionally support different use cases.

- clickhouse-go - a high-level language client which supports either the Go standard database/sql interface or the native interface.
- ch-go - a low-level client. Native interface only.

clickhouse-go provides a high-level interface, allowing users to query and insert data using row-orientated semantics and batching that are lenient with respect to data types - values will be converted provided no precision loss is potentially incurred. ch-go, meanwhile, provides an optimized column-orientated interface that provides fast data block streaming with low CPU and memory overhead at the expense of type strictness and more complex usage.

From version 2.3, clickhouse-go utilizes ch-go for low-level functions such as encoding, decoding, and compression. Note that clickhouse-go also supports the Go `database/sql` interface standard. Both clients use the native format for their encoding to provide optimal performance and can communicate over the native ClickHouse protocol. clickhouse-go also supports HTTP as its transport mechanism for cases where users have a requirement to proxy or load balance traffic.

When choosing a client library, users should be aware of their respective pros and cons - see Choosing a Client Library.
| | Native format | Native protocol | HTTP protocol | Row Orientated API | Column Orientated API | Type flexibility | Compression | Query Placeholders |
|:-------------:|:-------------:|:---------------:|:-------------:|:------------------:|:---------------------:|:----------------:|:-----------:|:------------------:|
| clickhouse-go | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| ch-go | ✅ | ✅ | | | ✅ | | ✅ | |
Choosing a client {#choosing-a-client}
Selecting a client library depends on your usage patterns and need for optimal performance. For insert heavy use cases, where millions of inserts are required per second, we recommend using the low level client
ch-go
. This client avoids the associated overhead of pivoting the data from a row-orientated format to columns, as the ClickHouse native format requires. Furthermore, it avoids any reflection or use of the
interface{}
(
any
) type to simplify usage.
For query workloads focused on aggregations or lower throughput insert workloads, the
clickhouse-go
provides a familiar
database/sql
interface and more straightforward row semantics. Users can also optionally use HTTP for the transport protocol and take advantage of helper functions to marshal rows to and from structs.
The clickhouse-go client {#the-clickhouse-go-client}
The clickhouse-go client provides two API interfaces for communicating with ClickHouse:
ClickHouse client-specific API
database/sql
standard - generic interface around SQL databases provided by Golang.
While the
database/sql
provides a database-agnostic interface, allowing developers to abstract their data store, it enforces some typing and query semantics that impact performance. For this reason, the client-specific API should be used where
performance is important
. However, users who wish to integrate ClickHouse into tooling, which supports multiple databases, may prefer to use the standard interface.
Both interfaces encode data using the
native format
and native protocol for communication. Additionally, the standard interface supports communication over HTTP.
| | Native format | Native protocol | HTTP protocol | Bulk write support | Struct marshaling | Compression | Query Placeholders |
|:------------------:|:-------------:|:---------------:|:-------------:|:------------------:|:-----------------:|:-----------:|:------------------:|
| ClickHouse API | ✅ | ✅ | | ✅ | ✅ | ✅ | ✅ |
|
database/sql
API | ✅ | ✅ | ✅ | ✅ | | ✅ | ✅ |
Installation {#installation}
v1 of the driver is deprecated and will not reach feature updates or support for new ClickHouse types. Users should migrate to v2, which offers superior performance.
To install the 2.x version of the client, add the package to your go.mod file:
require github.com/ClickHouse/clickhouse-go/v2 main
Or, clone the repository:
bash
git clone --branch v2 https://github.com/clickhouse/clickhouse-go.git $GOPATH/src/github
To install another version, modify the path or the branch name accordingly.
```bash
mkdir my-clickhouse-app && cd my-clickhouse-app
cat > go.mod <<-END
module my-clickhouse-app
go 1.18
require github.com/ClickHouse/clickhouse-go/v2 main
END
cat > main.go <<-END
package main
import (
"fmt"
"github.com/ClickHouse/clickhouse-go/v2"
)
func main() {
conn, _ := clickhouse.Open(&clickhouse.Options{Addr: []string{"127.0.0.1:9000"}})
v, _ := conn.ServerVersion()
fmt.Println(v.String())
}
END
go mod tidy
go run main.go
```
Versioning & compatibility {#versioning--compatibility}
The client is released independently of ClickHouse. 2.x represents the current major under development. All versions of 2.x should be compatible with each other.
ClickHouse compatibility {#clickhouse-compatibility}
The client supports:
All currently supported versions of ClickHouse as recorded
here
. As ClickHouse versions are no longer supported they are also no longer actively tested against client releases.
All versions of ClickHouse 2 years from the release date of the client. Note only LTS versions are actively tested.
Golang compatibility {#golang-compatibility}
| Client Version | Golang Versions |
|:--------------:|:---------------:|
| >= 2.0, <= 2.2 | 1.17, 1.18 |
| >= 2.3 | 1.18 |
ClickHouse client API {#clickhouse-client-api}
All code examples for the ClickHouse Client API can be found
here
.
Connecting {#connecting}
The following example, which returns the server version, demonstrates connecting to ClickHouse - assuming ClickHouse is not secured and accessible with the default user.
Note we use the default native port to connect.
go
conn, err := clickhouse.Open(&clickhouse.Options{
Addr: []string{fmt.Sprintf("%s:%d", env.Host, env.Port)},
Auth: clickhouse.Auth{
Database: env.Database,
Username: env.Username,
Password: env.Password,
},
})
if err != nil {
return err
}
v, err := conn.ServerVersion()
fmt.Println(v)
Full Example
For all subsequent examples, unless explicitly shown, we assume the use of the ClickHouse
conn
variable has been created and is available.
Connection settings {#connection-settings}
When opening a connection, an Options struct can be used to control client behavior. The following settings are available:
Protocol
- either Native or HTTP. HTTP is only supported currently for the
database/sql API
.
TLS
- TLS options. A non-nil value enables TLS. See
Using TLS
.
Addr
- a slice of addresses including port.
Auth
- Authentication detail. See
Authentication
.
DialContext
- custom dial function to determine how connections are established.
Debug
- true/false to enable debugging.
Debugf
- provides a function to consume debug output. Requires
debug
to be set to true.
Settings
- map of ClickHouse settings. These will be applied to all ClickHouse queries.
Using Context
allows settings to be set per query.
Compression
- enable compression for blocks. See
Compression
.
DialTimeout
- the maximum time to establish a connection. Defaults to
1s
.
MaxOpenConns
- max connections for use at any time. More or fewer connections may be in the idle pool, but only this number can be used at any time. Defaults to
MaxIdleConns+5
.
MaxIdleConns
- number of connections to maintain in the pool. Connections will be reused if possible. Defaults to
5
.
ConnMaxLifetime
- maximum lifetime to keep a connection available. Defaults to 1hr. Connections are destroyed after this time, with new connections added to the pool as required.
ConnOpenStrategy
- determines how the list of node addresses should be consumed and used to open connections. See
Connecting to Multiple Nodes
.
BlockBufferSize
- maximum number of blocks to decode into the buffer at once. Larger values will increase parallelization at the expense of memory. Block sizes are query dependent so while you can set this on the connection, we recommend you override per query based on the data it returns. Defaults to
2
.
go
conn, err := clickhouse.Open(&clickhouse.Options{
Addr: []string{fmt.Sprintf("%s:%d", env.Host, env.Port)},
Auth: clickhouse.Auth{
Database: env.Database,
Username: env.Username,
Password: env.Password,
},
DialContext: func(ctx context.Context, addr string) (net.Conn, error) {
dialCount++
var d net.Dialer
return d.DialContext(ctx, "tcp", addr)
},
Debug: true,
Debugf: func(format string, v ...interface{}) {
fmt.Printf(format, v...)
},
Settings: clickhouse.Settings{
"max_execution_time": 60,
},
Compression: &clickhouse.Compression{
Method: clickhouse.CompressionLZ4,
},
DialTimeout: time.Duration(10) * time.Second,
MaxOpenConns: 5,
MaxIdleConns: 5,
ConnMaxLifetime: time.Duration(10) * time.Minute,
ConnOpenStrategy: clickhouse.ConnOpenInOrder,
BlockBufferSize: 10,
})
if err != nil {
return err
}
Full Example
Connection pooling {#connection-pooling}
The client maintains a pool of connections, reusing these across queries as required. At most,
MaxOpenConns
will be used at any time, with the maximum pool size controlled by the
MaxIdleConns
. The client will acquire a connection from the pool for each query execution, returning it to the pool for reuse. A connection is used for the lifetime of a batch and released on
Send()
.
There is no guarantee the same connection in a pool will be used for subsequent queries unless the user sets
MaxOpenConns=1
. This is rarely needed but may be required for cases where users are using temporary tables.
Also, note that the
ConnMaxLifetime
is by default 1hr. This can lead to cases where the load to ClickHouse becomes unbalanced if nodes leave the cluster. This can occur when a node becomes unavailable, connections will balance to the other nodes. These connections will persist and not be refreshed for 1hr by default, even if the problematic node returns to the cluster. Consider lowering this value in heavy workload cases.
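The pool semantics described above - at most `MaxOpenConns` connections in use at once, with up to `MaxIdleConns` retained for reuse - can be sketched with standard-library primitives alone. This is an illustration of the behaviour, not the client's internal implementation; `connPool` and `newPool` are hypothetical names:

```go
package main

import "fmt"

// connPool illustrates MaxOpenConns / MaxIdleConns semantics.
type connPool struct {
	sem  chan struct{} // capacity = MaxOpenConns: limits in-use connections
	idle chan int      // capacity = MaxIdleConns: retains connections for reuse
	next int           // next connection ID to "dial"
}

func newPool(maxOpen, maxIdle int) *connPool {
	return &connPool{sem: make(chan struct{}, maxOpen), idle: make(chan int, maxIdle)}
}

// Acquire blocks if MaxOpenConns connections are already in use, then
// reuses an idle connection if one is available or "dials" a new one.
func (p *connPool) Acquire() int {
	p.sem <- struct{}{}
	select {
	case c := <-p.idle: // reuse
		return c
	default: // dial
		p.next++
		return p.next
	}
}

// Release returns a connection to the idle pool, or discards it
// when the idle pool is already full.
func (p *connPool) Release(c int) {
	select {
	case p.idle <- c:
	default: // idle pool full: connection would be closed
	}
	<-p.sem
}

func main() {
	p := newPool(5, 5)
	c := p.Acquire()
	p.Release(c)
	fmt.Println(p.Acquire()) // reuses connection 1
}
```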
Using TLS {#using-tls}
At a low level, all client connect methods (
DSN/OpenDB/Open
) will use the
Go tls package
to establish a secure connection. The client knows to use TLS if the Options struct contains a non-nil
tls.Config
pointer.
go
env, err := GetNativeTestEnvironment()
if err != nil {
return err
}
cwd, err := os.Getwd()
if err != nil {
return err
}
t := &tls.Config{}
caCert, err := ioutil.ReadFile(path.Join(cwd, "../../tests/resources/CAroot.crt"))
if err != nil {
return err
}
caCertPool := x509.NewCertPool()
successful := caCertPool.AppendCertsFromPEM(caCert)
if !successful {
return fmt.Errorf("failed to append CA certificate")
}
t.RootCAs = caCertPool
conn, err := clickhouse.Open(&clickhouse.Options{
Addr: []string{fmt.Sprintf("%s:%d", env.Host, env.SslPort)},
Auth: clickhouse.Auth{
Database: env.Database,
Username: env.Username,
Password: env.Password,
},
TLS: t,
})
if err != nil {
return err
}
v, err := conn.ServerVersion()
if err != nil {
return err
}
fmt.Println(v.String())
Full Example
This minimal
tls.Config
is normally sufficient to connect to the secure native port (normally 9440) on a ClickHouse server. If the ClickHouse server does not have a valid certificate (expired, wrong hostname, not signed by a publicly recognized root Certificate Authority),
InsecureSkipVerify
can be true, but this is strongly discouraged.
go
conn, err := clickhouse.Open(&clickhouse.Options{
Addr: []string{fmt.Sprintf("%s:%d", env.Host, env.SslPort)},
Auth: clickhouse.Auth{
Database: env.Database,
Username: env.Username,
Password: env.Password,
},
TLS: &tls.Config{
InsecureSkipVerify: true,
},
})
if err != nil {
return err
}
v, err := conn.ServerVersion()
Full Example
If additional TLS parameters are necessary, the application code should set the desired fields in the
tls.Config
struct. That can include specific cipher suites, forcing a particular TLS version (like 1.2 or 1.3), adding an internal CA certificate chain, adding a client certificate (and private key) if required by the ClickHouse server, and most of the other options that come with a more specialized security setup.
Authentication {#authentication}
Specify an Auth struct in the connection details to specify a username and password.
```go
conn, err := clickhouse.Open(&clickhouse.Options{
Addr: []string{fmt.Sprintf("%s:%d", env.Host, env.Port)},
Auth: clickhouse.Auth{
Database: env.Database,
Username: env.Username,
Password: env.Password,
},
})
if err != nil {
return err
}
v, err := conn.ServerVersion()
```
Full Example
Connecting to multiple nodes {#connecting-to-multiple-nodes}
Multiple addresses can be specified via the
Addr
struct.
go
conn, err := clickhouse.Open(&clickhouse.Options{
Addr: []string{"127.0.0.1:9001", "127.0.0.1:9002", fmt.Sprintf("%s:%d", env.Host, env.Port)},
Auth: clickhouse.Auth{
Database: env.Database,
Username: env.Username,
Password: env.Password,
},
})
if err != nil {
return err
}
v, err := conn.ServerVersion()
if err != nil {
return err
}
fmt.Println(v.String())
Full Example
Two connection strategies are available:
ConnOpenInOrder
(default) - addresses are consumed in order. Later addresses are only utilized in case of failure to connect using addresses earlier in the list. This is effectively a failover strategy.
ConnOpenRoundRobin
- Load is balanced across the addresses using a round-robin strategy.
This can be controlled through the option
ConnOpenStrategy
go
conn, err := clickhouse.Open(&clickhouse.Options{
Addr: []string{"127.0.0.1:9001", "127.0.0.1:9002", fmt.Sprintf("%s:%d", env.Host, env.Port)},
ConnOpenStrategy: clickhouse.ConnOpenRoundRobin,
Auth: clickhouse.Auth{
Database: env.Database,
Username: env.Username,
Password: env.Password,
},
})
if err != nil {
return err
}
v, err := conn.ServerVersion()
if err != nil {
return err
}
Full Example
Execution {#execution}
Arbitrary statements can be executed via the
Exec
method. This is useful for DDL and simple statements. It should not be used for larger inserts or query iterations.
go
conn.Exec(context.Background(), `DROP TABLE IF EXISTS example`)
err = conn.Exec(context.Background(), `
CREATE TABLE IF NOT EXISTS example (
Col1 UInt8,
Col2 String
) engine=Memory
`)
if err != nil {
return err
}
conn.Exec(context.Background(), "INSERT INTO example VALUES (1, 'test-1')")
Full Example
Note the ability to pass a Context to the query. This can be used to pass specific query level settings - see
Using Context
.
Batch Insert {#batch-insert}
To insert a large number of rows, the client provides batch semantics. This requires the preparation of a batch to which rows can be appended. This is finally sent via the
Send()
method. Batches are held in memory until
Send
is executed.
It is recommended to call
Close
on the batch to prevent leaking connections. This can be done via the
defer
keyword after preparing the batch. This will clean up the connection if
Send
never gets called. Note that this will result in 0 row inserts showing up in the query log if no rows were appended.
```go
conn, err := GetNativeConnection(nil, nil, nil)
if err != nil {
return err
}
ctx := context.Background()
defer func() {
conn.Exec(ctx, "DROP TABLE example")
}()
conn.Exec(context.Background(), "DROP TABLE IF EXISTS example")
err = conn.Exec(ctx, `
CREATE TABLE IF NOT EXISTS example (
Col1 UInt8
, Col2 String
, Col3 FixedString(3)
, Col4 UUID
, Col5 Map(String, UInt8)
, Col6 Array(String)
, Col7 Tuple(String, UInt8, Array(Map(String, String)))
, Col8 DateTime
) Engine = Memory
`)
if err != nil {
return err
}
batch, err := conn.PrepareBatch(ctx, "INSERT INTO example")
if err != nil {
return err
}
defer batch.Close()
for i := 0; i < 1000; i++ {
err := batch.Append(
uint8(42),
"ClickHouse",
"Inc",
uuid.New(),
map[string]uint8{"key": 1}, // Map(String, UInt8)
[]string{"Q", "W", "E", "R", "T", "Y"}, // Array(String)
[]interface{}{ // Tuple(String, UInt8, Array(Map(String, String)))
"String Value", uint8(5), []map[string]string{
{"key": "value"},
{"key": "value"},
{"key": "value"},
},
},
time.Now(),
)
if err != nil {
return err
}
}
return batch.Send()
```
Full Example
Recommendations for ClickHouse apply
here
. Batches should not be shared across go-routines - construct a separate batch per routine.
From the above example, note the need for variable types to align with the column type when appending rows. While the mapping is usually obvious, this interface tries to be flexible, and types will be converted provided no precision loss is incurred. For example, the following demonstrates inserting a string into a datetime64.
```go
batch, err := conn.PrepareBatch(ctx, "INSERT INTO example")
if err != nil {
return err
}
defer batch.Close()
for i := 0; i < 1000; i++ {
err := batch.Append(
"2006-01-02 15:04:05.999",
)
if err != nil {
return err
}
}
return batch.Send()
```
Full Example
For a full summary of supported go types for each column type, see
Type Conversions
.
Querying rows {#querying-rows}
Users can either query for a single row using the
QueryRow
method or obtain a cursor for iteration over a result set via
Query
. While the former accepts a destination for the data to be serialized into, the latter requires the call to
Scan
on each row.
go
row := conn.QueryRow(context.Background(), "SELECT * FROM example")
var (
col1 uint8
col2, col3, col4 string
col5 map[string]uint8
col6 []string
col7 []interface{}
col8 time.Time
)
if err := row.Scan(&col1, &col2, &col3, &col4, &col5, &col6, &col7, &col8); err != nil {
return err
}
fmt.Printf("row: col1=%d, col2=%s, col3=%s, col4=%s, col5=%v, col6=%v, col7=%v, col8=%v\n", col1, col2, col3, col4, col5, col6, col7, col8)
Full Example
go
rows, err := conn.Query(ctx, "SELECT Col1, Col2, Col3 FROM example WHERE Col1 >= 2")
if err != nil {
return err
}
for rows.Next() {
var (
col1 uint8
col2 string
col3 time.Time
)
if err := rows.Scan(&col1, &col2, &col3); err != nil {
return err
}
fmt.Printf("row: col1=%d, col2=%s, col3=%s\n", col1, col2, col3)
}
rows.Close()
return rows.Err()
Full Example
Note in both cases, we are required to pass a pointer to the variables we wish to serialize the respective column values into. These must be passed in the order specified in the
SELECT
statement - by default, the order of column declaration will be used in the event of a
SELECT *
as shown above.
Similar to insertion, the Scan method requires the target variables to be of an appropriate type. This again aims to be flexible, with types converted where possible, provided no precision loss is possible, e.g., the above example shows a UUID column being read into a string variable. For a full list of supported go types for each Column type, see
Type Conversions
.
Finally, note the ability to pass a
Context
to the
Query
and
QueryRow
methods. This can be used for query level settings - see
Using Context
for further details.
Async Insert {#async-insert}
Asynchronous inserts are supported through the Async method. This allows the user to specify whether the client should wait for the server to complete the insert or respond once the data has been received. This effectively controls the parameter
wait_for_async_insert
.
go
conn, err := GetNativeConnection(nil, nil, nil)
if err != nil {
return err
}
ctx := context.Background()
if err := clickhouse_tests.CheckMinServerServerVersion(conn, 21, 12, 0); err != nil {
return nil
}
defer func() {
conn.Exec(ctx, "DROP TABLE example")
}()
conn.Exec(ctx, `DROP TABLE IF EXISTS example`)
const ddl = `
CREATE TABLE example (
Col1 UInt64
, Col2 String
, Col3 Array(UInt8)
, Col4 DateTime
) ENGINE = Memory
`
if err := conn.Exec(ctx, ddl); err != nil {
return err
}
for i := 0; i < 100; i++ {
if err := conn.AsyncInsert(ctx, fmt.Sprintf(`INSERT INTO example VALUES (
%d, '%s', [1, 2, 3, 4, 5, 6, 7, 8, 9], now()
)`, i, "Golang SQL database driver"), false); err != nil {
return err
}
}
Full Example
Columnar Insert {#columnar-insert}
Data can also be inserted in column format. This can provide performance benefits if the data is already orientated in this structure by avoiding the need to pivot to rows.
```go
batch, err := conn.PrepareBatch(context.Background(), "INSERT INTO example")
if err != nil {
return err
}
defer batch.Close()
var (
col1 []uint64
col2 []string
col3 [][]uint8
col4 []time.Time
)
for i := 0; i < 1_000; i++ {
col1 = append(col1, uint64(i))
col2 = append(col2, "Golang SQL database driver")
col3 = append(col3, []uint8{1, 2, 3, 4, 5, 6, 7, 8, 9})
col4 = append(col4, time.Now())
}
if err := batch.Column(0).Append(col1); err != nil {
return err
}
if err := batch.Column(1).Append(col2); err != nil {
return err
}
if err := batch.Column(2).Append(col3); err != nil {
return err
}
if err := batch.Column(3).Append(col4); err != nil {
return err
}
return batch.Send()
```
Full Example
Using structs {#using-structs}
For users, Golang structs provide a logical representation of a row of data in ClickHouse. To assist with this, the native interface provides several convenient functions.
Select with serialize {#select-with-serialize}
The Select method allows a set of response rows to be marshaled into a slice of structs with a single invocation.
```go
var result []struct {
    Col1           uint8
    Col2           string
    ColumnWithName time.Time `ch:"Col3"`
}
if err = conn.Select(ctx, &result, "SELECT Col1, Col2, Col3 FROM example"); err != nil {
return err
}
for _, v := range result {
fmt.Printf("row: col1=%d, col2=%s, col3=%s\n", v.Col1, v.Col2, v.ColumnWithName)
}
```
Full Example
Scan struct {#scan-struct}
ScanStruct
allows the marshaling of a single Row from a query into a struct.
go
var result struct {
Col1 int64
Count uint64 `ch:"count"`
}
if err := conn.QueryRow(context.Background(), "SELECT Col1, COUNT() AS count FROM example WHERE Col1 = 5 GROUP BY Col1").ScanStruct(&result); err != nil {
return err
}
Full Example
Append struct {#append-struct}
AppendStruct
allows a struct to be appended to an existing
batch
and interpreted as a complete row. This requires the columns of the struct to align in both name and type with the table. While all columns must have an equivalent struct field, some struct fields may not have an equivalent column representation. These will simply be ignored.
```go
batch, err := conn.PrepareBatch(context.Background(), "INSERT INTO example")
if err != nil {
return err
}
defer batch.Close()
for i := 0; i < 1_000; i++ {
err := batch.AppendStruct(&row{
Col1: uint64(i),
Col2: "Golang SQL database driver",
Col3: []uint8{1, 2, 3, 4, 5, 6, 7, 8, 9},
Col4: time.Now(),
ColIgnored: "this will be ignored",
})
if err != nil {
return err
}
}
```
Full Example
Type conversions {#type-conversions}
The client aims to be as flexible as possible concerning accepting variable types for both insertion and marshaling of responses. In most cases, an equivalent Golang type exists for a ClickHouse column type, e.g.,
UInt64
to
uint64
. These logical mappings should always be supported. Users may wish to utilize variable types that can be inserted into columns or used to receive a response if the conversion of either the variable or received data takes place first. The client aims to support these conversions transparently, so users do not need to convert their data to align precisely before insertion and to provide flexible marshaling at query time. This transparent conversion does not allow for precision loss. For example, a uint32 cannot be used to receive data from a UInt64 column. Conversely, a string can be inserted into a datetime64 field provided it meets the format requirements.
The type conversions currently supported for primitive types are captured
here
.
This effort is ongoing and can be separated into insertion (
Append
/
AppendRow
) and read time (via a
Scan
). Should you need support for a specific conversion, please raise an issue.
Complex types {#complex-types}
Date/DateTime types {#datedatetime-types}
The ClickHouse go client supports the
Date
,
Date32
,
DateTime
, and
DateTime64
date/datetime types. Dates can be inserted as a string in the format
2006-01-02
or using the native go
time.Time{}
or
sql.NullTime
. DateTimes also support the latter types but require strings to be passed in the format
2006-01-02 15:04:05
with an optional timezone offset e.g.
2006-01-02 15:04:05 +08:00
.
time.Time{}
and
sql.NullTime
are both supported at read time as well as any implementation of the
sql.Scanner
interface.
Handling of timezone information depends on the ClickHouse type and whether the value is being inserted or read:
DateTime/DateTime64
At
insert
time the value is sent to ClickHouse in UNIX timestamp format. If no time zone is provided, the client will assume the client's local time zone.
time.Time{}
or
sql.NullTime
will be converted to epoch accordingly.
At
select
time the timezone of the column will be used if set when returning a
time.Time
value. If not, the timezone of the server will be used.
Date/Date32
At
insert
time, the timezone of any date is considered when converting the date to a unix timestamp, i.e., it will be offset by the timezone prior to storage as a date, as Date types have no locale in ClickHouse. If this is not specified in a string value, the local timezone will be used.
At
select
time, dates scanned into
time.Time{}
or
sql.NullTime{}
instances are returned without timezone information.
Array {#array}
Arrays should be inserted as slices. Typing rules for the elements are consistent with those for the
primitive type
, i.e., where possible elements will be converted.
A pointer to a slice should be provided at Scan time.
```go
batch, err := conn.PrepareBatch(ctx, "INSERT INTO example")
if err != nil {
return err
}
defer batch.Close()
var i int64
for i = 0; i < 10; i++ {
err := batch.Append(
[]string{strconv.Itoa(int(i)), strconv.Itoa(int(i + 1)), strconv.Itoa(int(i + 2)), strconv.Itoa(int(i + 3))},
[][]int64{{i, i + 1}, {i + 2, i + 3}, {i + 4, i + 5}},
)
if err != nil {
return err
}
}
if err := batch.Send(); err != nil {
return err
}
var (
col1 []string
col2 [][]int64
)
rows, err := conn.Query(ctx, "SELECT * FROM example")
if err != nil {
return err
}
for rows.Next() {
if err := rows.Scan(&col1, &col2); err != nil {
return err
}
fmt.Printf("row: col1=%v, col2=%v\n", col1, col2)
}
rows.Close()
```
Full Example
Map {#map}
Maps should be inserted as Golang maps with keys and values conforming to the type rules defined
earlier
.
```go
batch, err := conn.PrepareBatch(ctx, "INSERT INTO example")
if err != nil {
return err
}
defer batch.Close()
var i int64
for i = 0; i < 10; i++ {
err := batch.Append(
map[string]uint64{strconv.Itoa(int(i)): uint64(i)},
map[string][]string{strconv.Itoa(int(i)): {strconv.Itoa(int(i)), strconv.Itoa(int(i + 1)), strconv.Itoa(int(i + 2)), strconv.Itoa(int(i + 3))}},
map[string]map[string]uint64{strconv.Itoa(int(i)): {strconv.Itoa(int(i)): uint64(i)}},
)
if err != nil {
return err
}
}
if err := batch.Send(); err != nil {
return err
}
var (
col1 map[string]uint64
col2 map[string][]string
col3 map[string]map[string]uint64
)
rows, err := conn.Query(ctx, "SELECT * FROM example")
if err != nil {
return err
}
for rows.Next() {
if err := rows.Scan(&col1, &col2, &col3); err != nil {
return err
}
fmt.Printf("row: col1=%v, col2=%v, col3=%v\n", col1, col2, col3)
}
rows.Close()
```
Full Example
Tuples {#tuples}
Tuples represent a group of Columns of arbitrary length. The columns can either be explicitly named or only specify a type e.g.
```sql
-- unnamed
Col1 Tuple(String, Int64)
-- named
Col2 Tuple(name String, id Int64, age UInt8)
```
Of these approaches, named tuples offer greater flexibility. While unnamed tuples must be inserted and read using slices, named tuples are also compatible with maps.
```go
if err = conn.Exec(ctx, `
CREATE TABLE example (
Col1 Tuple(name String, age UInt8),
Col2 Tuple(String, UInt8),
Col3 Tuple(name String, id String)
)
Engine Memory
`); err != nil {
return err
}
defer func() {
conn.Exec(ctx, "DROP TABLE example")
}()
batch, err := conn.PrepareBatch(ctx, "INSERT INTO example")
if err != nil {
return err
}
defer batch.Close()
// both named and unnamed can be added with slices. Note we can use strongly typed lists and maps if all elements are the same type
if err = batch.Append([]interface{}{"Clicky McClickHouse", uint8(42)}, []interface{}{"Clicky McClickHouse Snr", uint8(78)}, []string{"Dale", "521211"}); err != nil {
return err
}
if err = batch.Append(map[string]interface{}{"name": "Clicky McClickHouse Jnr", "age": uint8(20)}, []interface{}{"Baby Clicky McClickHouse", uint8(1)}, map[string]string{"name": "Geoff", "id": "12123"}); err != nil {
return err
}
if err = batch.Send(); err != nil {
return err
}
var (
col1 map[string]interface{}
col2 []interface{}
col3 map[string]string
)
// named tuples can be retrieved into a map or slices, unnamed just slices
if err = conn.QueryRow(ctx, "SELECT * FROM example").Scan(&col1, &col2, &col3); err != nil {
return err
}
fmt.Printf("row: col1=%v, col2=%v, col3=%v\n", col1, col2, col3)
```
Full Example
Note: typed slices and maps are supported, provided the sub-columns in the named tuple are all of the same type.
Nested {#nested}
A Nested field is equivalent to an Array of named Tuples. Usage depends on whether the user has set
flatten_nested
to 1 or 0.
By setting flatten_nested to 0, Nested columns stay as a single array of tuples. This allows users to use slices of maps for insertion and retrieval and arbitrary levels of nesting. The map's key must equal the column's name, as shown in the example below.
Note: since the maps represent a tuple, they must be of the type
map[string]interface{}
. The values are currently not strongly typed.
```go
conn, err := GetNativeConnection(clickhouse.Settings{
"flatten_nested": 0,
}, nil, nil)
if err != nil {
return err
}
ctx := context.Background()
defer func() {
conn.Exec(ctx, "DROP TABLE example")
}()
conn.Exec(context.Background(), "DROP TABLE IF EXISTS example")
err = conn.Exec(ctx, `
CREATE TABLE example (
Col1 Nested(Col1_1 String, Col1_2 UInt8),
Col2 Nested(
Col2_1 UInt8,
Col2_2 Nested(
Col2_2_1 UInt8,
Col2_2_2 UInt8
)
)
) Engine Memory
`)
if err != nil {
return err
}
batch, err := conn.PrepareBatch(ctx, "INSERT INTO example")
if err != nil {
return err
}
defer batch.Close()
var i int64
for i = 0; i < 10; i++ {
err := batch.Append(
[]map[string]interface{}{
{
"Col1_1": strconv.Itoa(int(i)),
"Col1_2": uint8(i),
},
{
"Col1_1": strconv.Itoa(int(i + 1)),
"Col1_2": uint8(i + 1),
},
{
"Col1_1": strconv.Itoa(int(i + 2)),
"Col1_2": uint8(i + 2),
},
},
[]map[string]interface{}{
{
"Col2_2": []map[string]interface{}{
{
"Col2_2_1": uint8(i),
"Col2_2_2": uint8(i + 1),
},
},
"Col2_1": uint8(i),
},
{
"Col2_2": []map[string]interface{}{
{
"Col2_2_1": uint8(i + 2),
"Col2_2_2": uint8(i + 3),
},
},
"Col2_1": uint8(i + 1),
},
},
)
if err != nil {
return err
}
}
if err := batch.Send(); err != nil {
return err
}
var (
col1 []map[string]interface{}
col2 []map[string]interface{}
)
rows, err := conn.Query(ctx, "SELECT * FROM example")
if err != nil {
return err
}
for rows.Next() {
if err := rows.Scan(&col1, &col2); err != nil {
return err
}
fmt.Printf("row: col1=%v, col2=%v\n", col1, col2)
}
rows.Close()
```
Full Example -
flatten_nested=0
If the default value of 1 is used for
flatten_nested
, nested columns are flattened to separate arrays. This requires using nested slices for insertion and retrieval. While arbitrary levels of nesting may work, this is not officially supported.
```go
conn, err := GetNativeConnection(nil, nil, nil)
if err != nil {
return err
}
ctx := context.Background()
defer func() {
conn.Exec(ctx, "DROP TABLE example")
}()
conn.Exec(ctx, "DROP TABLE IF EXISTS example")
err = conn.Exec(ctx, `
CREATE TABLE example (
Col1 Nested(Col1_1 String, Col1_2 UInt8),
Col2 Nested(
Col2_1 UInt8,
Col2_2 Nested(
Col2_2_1 UInt8,
Col2_2_2 UInt8
)
)
) Engine Memory
`)
if err != nil {
return err
}
batch, err := conn.PrepareBatch(ctx, "INSERT INTO example")
if err != nil {
return err
}
defer batch.Close()
var i uint8
for i = 0; i < 10; i++ {
col1_1_data := []string{strconv.Itoa(int(i)), strconv.Itoa(int(i + 1)), strconv.Itoa(int(i + 2))}
col1_2_data := []uint8{i, i + 1, i + 2}
col2_1_data := []uint8{i, i + 1, i + 2}
col2_2_data := [][][]interface{}{
{
{i, i + 1},
},
{
{i + 2, i + 3},
},
{
{i + 4, i + 5},
},
}
err := batch.Append(
col1_1_data,
col1_2_data,
col2_1_data,
col2_2_data,
)
if err != nil {
return err
}
}
if err := batch.Send(); err != nil {
return err
}
```
Full Example -
flatten_nested=1
Note: Nested columns must have the same dimensions. For example, in the above example,
Col2_2
and
Col2_1
must have the same number of elements.
Due to a more straightforward interface and official support for nesting, we recommend
flatten_nested=0
.
Geo types {#geo-types}
The client supports the geo types Point, Ring, Polygon, and MultiPolygon. These fields are represented in Go using the package
github.com/paulmach/orb
.
```go
if err = conn.Exec(ctx, `
CREATE TABLE example (
point Point,
ring Ring,
polygon Polygon,
mPolygon MultiPolygon
)
Engine Memory
`); err != nil {
return err
}
batch, err := conn.PrepareBatch(ctx, "INSERT INTO example")
if err != nil {
return err
}
defer batch.Close()
if err = batch.Append(
orb.Point{11, 22},
orb.Ring{
orb.Point{1, 2},
orb.Point{1, 2},
},
orb.Polygon{
orb.Ring{
orb.Point{1, 2},
orb.Point{12, 2},
},
orb.Ring{
orb.Point{11, 2},
orb.Point{1, 12},
},
},
orb.MultiPolygon{
orb.Polygon{
orb.Ring{
orb.Point{1, 2},
orb.Point{12, 2},
},
orb.Ring{
orb.Point{11, 2},
orb.Point{1, 12},
},
},
orb.Polygon{
orb.Ring{
orb.Point{1, 2},
orb.Point{12, 2},
},
orb.Ring{
orb.Point{11, 2},
orb.Point{1, 12},
},
},
},
); err != nil {
return err
}
if err = batch.Send(); err != nil {
return err
}
var (
point orb.Point
ring orb.Ring
polygon orb.Polygon
mPolygon orb.MultiPolygon
)
if err = conn.QueryRow(ctx, "SELECT * FROM example").Scan(&point, &ring, &polygon, &mPolygon); err != nil {
return err
}
```
Full Example
UUID {#uuid}
The UUID type is supported by the
github.com/google/uuid
package. Users can also send and marshal a UUID as a string or any type which implements
sql.Scanner
or
fmt.Stringer
.
```go
if err = conn.Exec(ctx, `
CREATE TABLE example (
col1 UUID,
col2 UUID
)
Engine Memory
`); err != nil {
return err
}
batch, err := conn.PrepareBatch(ctx, "INSERT INTO example")
if err != nil {
return err
}
defer batch.Close()
col1Data, _ := uuid.NewUUID()
if err = batch.Append(
col1Data,
"603966d6-ed93-11ec-8ea0-0242ac120002",
); err != nil {
return err
}
if err = batch.Send(); err != nil {
return err
}
var (
col1 uuid.UUID
col2 uuid.UUID
)
if err = conn.QueryRow(ctx, "SELECT * FROM example").Scan(&col1, &col2); err != nil {
return err
}
```
Full Example
Decimal {#decimal}
The Decimal type is supported by the
github.com/shopspring/decimal
package.
```go
if err = conn.Exec(ctx, `
CREATE TABLE example (
Col1 Decimal32(3),
Col2 Decimal(18,6),
Col3 Decimal(15,7),
Col4 Decimal128(8),
Col5 Decimal256(9)
) Engine Memory
`); err != nil {
return err
}
batch, err := conn.PrepareBatch(ctx, "INSERT INTO example")
if err != nil {
return err
}
defer batch.Close()
if err = batch.Append(
decimal.New(25, 4),
decimal.New(30, 5),
decimal.New(35, 6),
decimal.New(135, 7),
decimal.New(256, 8),
); err != nil {
return err
}
if err = batch.Send(); err != nil {
return err
}
var (
col1 decimal.Decimal
col2 decimal.Decimal
col3 decimal.Decimal
col4 decimal.Decimal
col5 decimal.Decimal
)
if err = conn.QueryRow(ctx, "SELECT * FROM example").Scan(&col1, &col2, &col3, &col4, &col5); err != nil {
return err
}
fmt.Printf("col1=%v, col2=%v, col3=%v, col4=%v, col5=%v\n", col1, col2, col3, col4, col5)
```
Full Example
Nullable {#nullable}
The Go value nil represents a ClickHouse NULL. This can be used if a field is declared Nullable. At insert time, nil can be passed for both the normal and Nullable version of a column. For the former, the default value for the type will be persisted, e.g., an empty string for String. For the Nullable version, a NULL value will be stored in ClickHouse.
At Scan time, the user must pass a pointer to a type that supports nil, e.g., a *string, in order to represent the nil value for a Nullable field. In the example below, col1, which is a Nullable(String), thus receives a *string. This allows nil to be represented.
```go
if err = conn.Exec(ctx, `
CREATE TABLE example (
col1 Nullable(String),
col2 String,
col3 Nullable(Int8),
col4 Nullable(Int64)
)
Engine Memory
`); err != nil {
return err
}
batch, err := conn.PrepareBatch(ctx, "INSERT INTO example")
if err != nil {
return err
}
defer batch.Close()
if err = batch.Append(
nil,
nil,
nil,
sql.NullInt64{Int64: 0, Valid: false},
); err != nil {
return err
}
if err = batch.Send(); err != nil {
return err
}
var (
col1 *string
col2 string
col3 *int8
col4 sql.NullInt64
)
if err = conn.QueryRow(ctx, "SELECT * FROM example").Scan(&col1, &col2, &col3, &col4); err != nil {
return err
}
```
Full Example
The client additionally supports the
sql.Null*
types e.g.
sql.NullInt64
. These are compatible with their equivalent ClickHouse types.
Big Ints - Int128, Int256, UInt128, UInt256 {#big-ints---int128-int256-uint128-uint256}
Number types larger than 64 bits are represented using the native Go
math/big
package.
```go
if err = conn.Exec(ctx, `
CREATE TABLE example (
Col1 Int128,
Col2 UInt128,
Col3 Array(Int128),
Col4 Int256,
Col5 Array(Int256),
Col6 UInt256,
Col7 Array(UInt256)
) Engine Memory`); err != nil {
return err
}
batch, err := conn.PrepareBatch(ctx, "INSERT INTO example")
if err != nil {
return err
}
defer batch.Close()
col1Data, _ := new(big.Int).SetString("170141183460469231731687303715884105727", 10)
col2Data := big.NewInt(128)
col3Data := []*big.Int{
big.NewInt(-128),
big.NewInt(128128),
big.NewInt(128128128),
}
col4Data := big.NewInt(256)
col5Data := []*big.Int{
big.NewInt(256),
big.NewInt(256256),
big.NewInt(256256256256),
}
col6Data := big.NewInt(256)
col7Data := []*big.Int{
big.NewInt(256),
big.NewInt(256256),
big.NewInt(256256256256),
}
if err = batch.Append(col1Data, col2Data, col3Data, col4Data, col5Data, col6Data, col7Data); err != nil {
return err
}
if err = batch.Send(); err != nil {
return err
}
var (
col1 big.Int
col2 big.Int
col3 []*big.Int
col4 big.Int
col5 []*big.Int
col6 big.Int
col7 []*big.Int
)
if err = conn.QueryRow(ctx, "SELECT * FROM example").Scan(&col1, &col2, &col3, &col4, &col5, &col6, &col7); err != nil {
return err
}
fmt.Printf("col1=%v, col2=%v, col3=%v, col4=%v, col5=%v, col6=%v, col7=%v\n", col1, col2, col3, col4, col5, col6, col7)
```
Full Example
Compression {#compression}
Support for compression methods depends on the underlying protocol in use. For the native protocol, the client supports
LZ4
and
ZSTD
compression. This is performed at a block level only. Compression can be enabled by including a
Compression
configuration with the connection.
```go
conn, err := clickhouse.Open(&clickhouse.Options{
Addr: []string{fmt.Sprintf("%s:%d", env.Host, env.Port)},
Auth: clickhouse.Auth{
Database: env.Database,
Username: env.Username,
Password: env.Password,
},
Compression: &clickhouse.Compression{
Method: clickhouse.CompressionZSTD,
},
MaxOpenConns: 1,
})
ctx := context.Background()
defer func() {
conn.Exec(ctx, "DROP TABLE example")
}()
conn.Exec(context.Background(), "DROP TABLE IF EXISTS example")
if err = conn.Exec(ctx, `
CREATE TABLE example (
Col1 Array(String)
) Engine Memory
`); err != nil {
return err
}
batch, err := conn.PrepareBatch(ctx, "INSERT INTO example")
if err != nil {
return err
}
defer batch.Close()
for i := 0; i < 1000; i++ {
if err := batch.Append([]string{strconv.Itoa(i), strconv.Itoa(i + 1), strconv.Itoa(i + 2), strconv.Itoa(i + 3)}); err != nil {
return err
}
}
if err := batch.Send(); err != nil {
return err
}
```
Full Example
Additional compression techniques are available if using the standard interface over HTTP. See
database/sql API - Compression
for further details.
Parameter binding {#parameter-binding}
The client supports parameter binding for the
Exec
,
Query
, and
QueryRow
methods. As shown in the examples below, this is supported using positional, numeric, and named parameters.
```go
var count uint64
// positional bind
if err = conn.QueryRow(ctx, "SELECT count() FROM example WHERE Col1 >= ? AND Col3 < ?", 500, now.Add(time.Duration(750)*time.Second)).Scan(&count); err != nil {
return err
}
// 250
fmt.Printf("Positional bind count: %d\n", count)
// numeric bind
if err = conn.QueryRow(ctx, "SELECT count() FROM example WHERE Col1 <= $2 AND Col3 > $1", now.Add(time.Duration(150)*time.Second), 250).Scan(&count); err != nil {
return err
}
// 100
fmt.Printf("Numeric bind count: %d\n", count)
// named bind
if err = conn.QueryRow(ctx, "SELECT count() FROM example WHERE Col1 <= @col1 AND Col3 > @col3", clickhouse.Named("col1", 100), clickhouse.Named("col3", now.Add(time.Duration(50)*time.Second))).Scan(&count); err != nil {
return err
}
// 50
fmt.Printf("Named bind count: %d\n", count)
```
Full Example
Special cases {#special-cases}
By default, slices will be unfolded into a comma-separated list of values if passed as a parameter to a query. If users require a set of values to be injected with wrapping
[ ]
,
ArraySet
should be used.
If groups/tuples are required, with wrapping
( )
e.g., for use with IN operators, users can use a
GroupSet
. This is particularly useful for cases where multiple groups are required, as shown in the example below.
Finally, DateTime64 fields require precision in order to ensure parameters are rendered appropriately. The precision level for the field is unknown by the client, however, so the user must provide it. To facilitate this, we provide the
DateNamed
parameter.
```go
var count uint64
// arrays will be unfolded
if err = conn.QueryRow(ctx, "SELECT count() FROM example WHERE Col1 IN (?)", []int{100, 200, 300, 400, 500}).Scan(&count); err != nil {
return err
}
fmt.Printf("Array unfolded count: %d\n", count)
// arrays will be preserved with []
if err = conn.QueryRow(ctx, "SELECT count() FROM example WHERE Col4 = ?", clickhouse.ArraySet{300, 301}).Scan(&count); err != nil {
return err
}
fmt.Printf("Array count: %d\n", count)
// Group sets allow us to form ( ) lists
if err = conn.QueryRow(ctx, "SELECT count() FROM example WHERE Col1 IN ?", clickhouse.GroupSet{[]interface{}{100, 200, 300, 400, 500}}).Scan(&count); err != nil {
return err
}
fmt.Printf("Group count: %d\n", count)
// More useful when we need nesting
if err = conn.QueryRow(ctx, "SELECT count() FROM example WHERE (Col1, Col5) IN (?)", []clickhouse.GroupSet{{[]interface{}{100, 101}}, {[]interface{}{200, 201}}}).Scan(&count); err != nil {
return err
}
fmt.Printf("Group count: %d\n", count)
// Use DateNamed when you need precision in your time
if err = conn.QueryRow(ctx, "SELECT count() FROM example WHERE Col3 >= @col3", clickhouse.DateNamed("col3", now.Add(time.Duration(500)*time.Millisecond), clickhouse.NanoSeconds)).Scan(&count); err != nil {
return err
}
fmt.Printf("NamedDate count: %d\n", count)
```
Full Example
Using context {#using-context}
Go contexts provide a means of passing deadlines, cancellation signals, and other request-scoped values across API boundaries. All methods on a connection accept a context as their first parameter. While previous examples used context.Background(), users can use this capability to pass settings and deadlines and to cancel queries.
Passing a context created with
WithDeadline
allows execution time limits to be placed on queries. Note this is an absolute time; expiry will only release the connection and send a cancel signal to ClickHouse.
WithCancel
can alternatively be used to cancel a query explicitly.
The helpers
clickhouse.WithQueryID
and
clickhouse.WithQuotaKey
allow a query id and quota key to be specified. Query ids can be useful for tracking queries in logs and for cancellation purposes. A quota key can be used to impose limits on ClickHouse usage based on a unique key value - see
Quotas Management
for further details.
Users can also use the context to ensure a setting is only applied for a specific query - rather than for the entire connection, as shown in
Connection Settings
.
Finally, users can control the size of the block buffer via
clickhouse.WithBlockSize
. This overrides the connection-level setting
BlockBufferSize
and controls the maximum number of blocks that are decoded and held in memory at any time. Larger values potentially mean more parallelization at the expense of memory.
Examples of the above are shown below.
```go
dialCount := 0
conn, err := clickhouse.Open(&clickhouse.Options{
Addr: []string{fmt.Sprintf("%s:%d", env.Host, env.Port)},
Auth: clickhouse.Auth{
Database: env.Database,
Username: env.Username,
Password: env.Password,
},
DialContext: func(ctx context.Context, addr string) (net.Conn, error) {
dialCount++
var d net.Dialer
return d.DialContext(ctx, "tcp", addr)
},
})
if err != nil {
return err
}
if err := clickhouse_tests.CheckMinServerServerVersion(conn, 22, 6, 1); err != nil {
return nil
}
// we can use context to pass settings to a specific API call
ctx := clickhouse.Context(context.Background(), clickhouse.WithSettings(clickhouse.Settings{
"allow_experimental_object_type": "1",
}))
conn.Exec(ctx, "DROP TABLE IF EXISTS example")
// to create a JSON column we need allow_experimental_object_type=1
if err = conn.Exec(ctx, `
CREATE TABLE example (
Col1 JSON
)
Engine Memory
`); err != nil {
return err
}
// queries can be cancelled using the context
ctx, cancel := context.WithCancel(context.Background())
go func() {
cancel()
}()
if err = conn.QueryRow(ctx, "SELECT sleep(3)").Scan(); err == nil {
return fmt.Errorf("expected cancel")
}
// set a deadline for a query - this will cancel the query after the absolute time is reached.
// queries will continue to completion in ClickHouse
ctx, cancel = context.WithDeadline(context.Background(), time.Now().Add(-time.Second))
defer cancel()
if err := conn.Ping(ctx); err == nil {
return fmt.Errorf("expected deadline exceeded")
}
// set a query id to assist tracing queries in logs e.g. see system.query_log
var one uint8
queryId, _ := uuid.NewUUID()
ctx = clickhouse.Context(context.Background(), clickhouse.WithQueryID(queryId.String()))
if err = conn.QueryRow(ctx, "SELECT 1").Scan(&one); err != nil {
return err
}
conn.Exec(context.Background(), "DROP QUOTA IF EXISTS foobar")
defer func() {
conn.Exec(context.Background(), "DROP QUOTA IF EXISTS foobar")
}()
ctx = clickhouse.Context(context.Background(), clickhouse.WithQuotaKey("abcde"))
// set a quota key - first create the quota
if err = conn.Exec(ctx, "CREATE QUOTA IF NOT EXISTS foobar KEYED BY client_key FOR INTERVAL 1 minute MAX queries = 5 TO default"); err != nil {
return err
}
type Number struct {
Number uint64 `ch:"number"`
}
for i := 1; i <= 6; i++ {
var result []Number
if err = conn.Select(ctx, &result, "SELECT number FROM numbers(10)"); err != nil {
return err
}
}
```
Full Example
Progress/profile/log information {#progressprofilelog-information}
Progress, Profile, and Log information can be requested on queries. Progress information will report statistics on the number of rows and bytes that have been read and processed in ClickHouse. Conversely, Profile information provides a summary of data returned to the client, including totals of bytes (uncompressed), rows, and blocks. Finally, log information provides statistics on threads, e.g., memory usage and data speed.
Obtaining this information requires the user to use
Context
, to which the user can pass call-back functions.
```go
totalRows := uint64(0)
// use context to pass a call back for progress and profile info
ctx := clickhouse.Context(context.Background(), clickhouse.WithProgress(func(p *clickhouse.Progress) {
fmt.Println("progress: ", p)
totalRows += p.Rows
}), clickhouse.WithProfileInfo(func(p *clickhouse.ProfileInfo) {
fmt.Println("profile info: ", p)
fmt.Println("profile info: ", p)
}), clickhouse.WithLogs(func(log *clickhouse.Log) {
fmt.Println("log info: ", log)
}))
rows, err := conn.Query(ctx, "SELECT number from numbers(1000000) LIMIT 1000000")
if err != nil {
return err
}
for rows.Next() {
}
fmt.Printf("Total Rows: %d\n", totalRows)
rows.Close()
```
Full Example
Dynamic scanning {#dynamic-scanning}
Users may need to read tables for which they do not know the schema or type of the fields being returned. This is common in cases where ad-hoc data analysis is performed or generic tooling is written. To achieve this, column-type information is available on query responses. This can be used with Go reflection to create runtime instances of correctly typed variables which can be passed to Scan.
```go
const query = `
SELECT
1 AS Col1
, 'Text' AS Col2
`
rows, err := conn.Query(context.Background(), query)
if err != nil {
return err
}
var (
columnTypes = rows.ColumnTypes()
vars = make([]interface{}, len(columnTypes))
)
for i := range columnTypes {
vars[i] = reflect.New(columnTypes[i].ScanType()).Interface()
}
for rows.Next() {
if err := rows.Scan(vars...); err != nil {
return err
}
for _, v := range vars {
switch v := v.(type) {
case *string:
fmt.Println(*v)
case *uint8:
fmt.Println(*v)
}
}
}
```
Full Example
External tables {#external-tables}
External tables
allow the client to send data to ClickHouse with a SELECT query. This data is put in a temporary table and can be used in the query itself for evaluation.
To send external data to ClickHouse with a query, the user must build an external table via
ext.NewTable
before passing this via the context.
```go
table1, err := ext.NewTable("external_table_1",
ext.Column("col1", "UInt8"),
ext.Column("col2", "String"),
ext.Column("col3", "DateTime"),
)
if err != nil {
return err
}
for i := 0; i < 10; i++ {
if err = table1.Append(uint8(i), fmt.Sprintf("value_%d", i), time.Now()); err != nil {
return err
}
}
table2, err := ext.NewTable("external_table_2",
ext.Column("col1", "UInt8"),
ext.Column("col2", "String"),
ext.Column("col3", "DateTime"),
)
for i := 0; i < 10; i++ {
table2.Append(uint8(i), fmt.Sprintf("value_%d", i), time.Now())
}
ctx := clickhouse.Context(context.Background(),
clickhouse.WithExternalTable(table1, table2),
)
rows, err := conn.Query(ctx, "SELECT * FROM external_table_1")
if err != nil {
return err
}
for rows.Next() {
var (
col1 uint8
col2 string
col3 time.Time
)
rows.Scan(&col1, &col2, &col3)
fmt.Printf("col1=%d, col2=%s, col3=%v\n", col1, col2, col3)
}
rows.Close()
var count uint64
if err := conn.QueryRow(ctx, "SELECT COUNT(*) FROM external_table_1").Scan(&count); err != nil {
return err
}
fmt.Printf("external_table_1: %d\n", count)
if err := conn.QueryRow(ctx, "SELECT COUNT(*) FROM external_table_2").Scan(&count); err != nil {
return err
}
fmt.Printf("external_table_2: %d\n", count)
if err := conn.QueryRow(ctx, "SELECT COUNT(*) FROM (SELECT * FROM external_table_1 UNION ALL SELECT * FROM external_table_2)").Scan(&count); err != nil {
return err
}
fmt.Printf("external_table_1 UNION external_table_2: %d\n", count)
```
Full Example
Open telemetry {#open-telemetry}
ClickHouse allows a
trace context
to be passed as part of the native protocol. The client allows a Span to be created via the function
clickhouse.WithSpan
and passed via the Context to achieve this.
```go
var count uint64
rows := conn.QueryRow(clickhouse.Context(context.Background(), clickhouse.WithSpan(
trace.NewSpanContext(trace.SpanContextConfig{
SpanID: trace.SpanID{1, 2, 3, 4, 5},
TraceID: trace.TraceID{5, 4, 3, 2, 1},
}),
)), "SELECT COUNT() FROM (SELECT number FROM system.numbers LIMIT 5)")
if err := rows.Scan(&count); err != nil {
return err
}
fmt.Printf("count: %d\n", count)
```
Full Example
Full details on exploiting tracing can be found under
OpenTelemetry support
.
Database/SQL API {#databasesql-api}
The
database/sql
or "standard" API allows users to use the client in scenarios where application code should be agnostic of the underlying databases by conforming to a standard interface. This comes at some expense - additional layers of abstraction and indirection and primitives which are not necessarily aligned with ClickHouse. These costs are, however, typically acceptable in scenarios where tooling needs to connect to multiple databases.
Additionally, this client supports using HTTP as the transport layer - data will still be encoded in the native format for optimal performance.
The following aims to mirror the structure of the documentation for the ClickHouse API.
Full code examples for the standard API can be found
here
.
Connecting {#connecting-1}
Connection can be achieved either via a DSN string with the format
clickhouse://<host>:<port>?<query_option>=<value>
and
Open
method or via the
clickhouse.OpenDB
method. The latter is not part of the
database/sql
specification but returns a
sql.DB
instance. This method provides functionality such as profiling, for which there are no obvious means of exposing through the
database/sql
specification.
```go
func Connect() error {
env, err := GetStdTestEnvironment()
if err != nil {
return err
}
conn := clickhouse.OpenDB(&clickhouse.Options{
Addr: []string{fmt.Sprintf("%s:%d", env.Host, env.Port)},
Auth: clickhouse.Auth{
Database: env.Database,
Username: env.Username,
Password: env.Password,
},
})
return conn.Ping()
}
func ConnectDSN() error {
env, err := GetStdTestEnvironment()
if err != nil {
return err
}
conn, err := sql.Open("clickhouse", fmt.Sprintf("clickhouse://%s:%d?username=%s&password=%s", env.Host, env.Port, env.Username, env.Password))
if err != nil {
return err
}
return conn.Ping()
}
```
Full Example
For all subsequent examples, unless explicitly shown, we assume the ClickHouse
conn
variable has been created and is available.
Connection settings {#connection-settings-1}
The following parameters can be passed in the DSN string:
hosts
- comma-separated list of single address hosts for load-balancing and failover - see
Connecting to Multiple Nodes
.
username/password
- auth credentials - see
Authentication
database
- select the current default database
dial_timeout
- a duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix such as
300ms
,
1s
. Valid time units are
ms
,
s
,
m
.
connection_open_strategy
-
random/in_order
(default
random
) - see
Connecting to Multiple Nodes
round_robin
- choose a round-robin server from the set
in_order
- first live server is chosen in specified order
debug
- enable debug output (boolean value)
compress
- specify the compression algorithm -
none
(default),
zstd
,
lz4
,
gzip
,
deflate
,
br
. If set to
true
,
lz4
will be used. Only
lz4
and
zstd
are supported for native communication.
compress_level
- Level of compression (default is
0
). See Compression. This is algorithm specific:
gzip
-
-2
(Best Speed) to
9
(Best Compression)
deflate
-
-2
(Best Speed) to
9
(Best Compression)
br
-
0
(Best Speed) to
11
(Best Compression)
zstd
,
lz4
- ignored
secure
- establish secure SSL connection (default is
false
)
skip_verify
- skip certificate verification (default is
false
)
block_buffer_size
- allows users to control the block buffer size. See
BlockBufferSize
. (default is
2
)
```go
func ConnectSettings() error {
env, err := GetStdTestEnvironment()
if err != nil {
return err
}
conn, err := sql.Open("clickhouse", fmt.Sprintf("clickhouse://127.0.0.1:9001,127.0.0.1:9002,%s:%d/%s?username=%s&password=%s&dial_timeout=10s&connection_open_strategy=round_robin&debug=true&compress=lz4", env.Host, env.Port, env.Database, env.Username, env.Password))
if err != nil {
return err
}
return conn.Ping()
}
```
Full Example
Connection pooling {#connection-pooling-1}
Users can influence the use of the provided list of node addresses as described in
Connecting to Multiple Nodes
. Connection management and pooling is, however, delegated to
sql.DB
by design.
Connecting over HTTP {#connecting-over-http}
By default, connections are established over the native protocol. For users needing HTTP, this can be enabled by either modifying the DSN to include the HTTP protocol or by specifying the Protocol in the connection options.
```go
func ConnectHTTP() error {
env, err := GetStdTestEnvironment()
if err != nil {
return err
}
conn := clickhouse.OpenDB(&clickhouse.Options{
Addr: []string{fmt.Sprintf("%s:%d", env.Host, env.HttpPort)},
Auth: clickhouse.Auth{
Database: env.Database,
Username: env.Username,
Password: env.Password,
},
Protocol: clickhouse.HTTP,
})
return conn.Ping()
}
func ConnectDSNHTTP() error {
env, err := GetStdTestEnvironment()
if err != nil {
return err
}
conn, err := sql.Open("clickhouse", fmt.Sprintf("http://%s:%d?username=%s&password=%s", env.Host, env.HttpPort, env.Username, env.Password))
if err != nil {
return err
}
return conn.Ping()
}
```
Full Example
Connecting to multiple nodes {#connecting-to-multiple-nodes-1}
If using
OpenDB
, connect to multiple hosts using the same options approach as that used for the ClickHouse API - optionally specifying the
ConnOpenStrategy
.
For DSN-based connections, the string accepts multiple hosts and a
connection_open_strategy
parameter for which the value
round_robin
or
in_order
can be set. | {"source_file": "index.md"} | [
```go
func MultiStdHost() error {
env, err := GetStdTestEnvironment()
if err != nil {
return err
}
conn, err := clickhouse.Open(&clickhouse.Options{
Addr: []string{"127.0.0.1:9001", "127.0.0.1:9002", fmt.Sprintf("%s:%d", env.Host, env.Port)},
Auth: clickhouse.Auth{
Database: env.Database,
Username: env.Username,
Password: env.Password,
},
ConnOpenStrategy: clickhouse.ConnOpenRoundRobin,
})
if err != nil {
return err
}
v, err := conn.ServerVersion()
if err != nil {
return err
}
fmt.Println(v.String())
return nil
}
func MultiStdHostDSN() error {
env, err := GetStdTestEnvironment()
if err != nil {
return err
}
conn, err := sql.Open("clickhouse", fmt.Sprintf("clickhouse://127.0.0.1:9001,127.0.0.1:9002,%s:%d?username=%s&password=%s&connection_open_strategy=round_robin", env.Host, env.Port, env.Username, env.Password))
if err != nil {
return err
}
return conn.Ping()
}
```
Full Example
Using TLS {#using-tls-1}
If using a DSN connection string, SSL can be enabled via the parameter "secure=true". The
OpenDB
method utilizes the same approach as the
native API for TLS
, relying on the specification of a non-nil TLS struct. While the DSN connection string supports the parameter skip_verify to skip SSL verification, the
OpenDB
method is required for more advanced TLS configurations - since it permits the passing of a configuration.
```go
func ConnectSSL() error {
env, err := GetStdTestEnvironment()
if err != nil {
return err
}
cwd, err := os.Getwd()
if err != nil {
return err
}
t := &tls.Config{}
caCert, err := ioutil.ReadFile(path.Join(cwd, "../../tests/resources/CAroot.crt"))
if err != nil {
return err
}
caCertPool := x509.NewCertPool()
successful := caCertPool.AppendCertsFromPEM(caCert)
if !successful {
return fmt.Errorf("failed to append CA certificate to pool")
}
t.RootCAs = caCertPool
conn := clickhouse.OpenDB(&clickhouse.Options{
Addr: []string{fmt.Sprintf("%s:%d", env.Host, env.SslPort)},
Auth: clickhouse.Auth{
Database: env.Database,
Username: env.Username,
Password: env.Password,
},
TLS: t,
})
return conn.Ping()
} | {"source_file": "index.md"} | [
func ConnectDSNSSL() error {
env, err := GetStdTestEnvironment()
if err != nil {
return err
}
conn, err := sql.Open("clickhouse", fmt.Sprintf("https://%s:%d?secure=true&skip_verify=true&username=%s&password=%s", env.Host, env.HttpsPort, env.Username, env.Password))
if err != nil {
return err
}
return conn.Ping()
}
```
Full Example
Authentication {#authentication-1}
If using
OpenDB
, authentication information can be passed via the usual options. For DSN-based connections, a username and password can be passed in the connection string - either as parameters or as credentials encoded in the address.
```go
func ConnectAuth() error {
env, err := GetStdTestEnvironment()
if err != nil {
return err
}
conn := clickhouse.OpenDB(&clickhouse.Options{
Addr: []string{fmt.Sprintf("%s:%d", env.Host, env.Port)},
Auth: clickhouse.Auth{
Database: env.Database,
Username: env.Username,
Password: env.Password,
},
})
return conn.Ping()
}
func ConnectDSNAuth() error {
env, err := GetStdTestEnvironment()
conn, err := sql.Open("clickhouse", fmt.Sprintf("http://%s:%d?username=%s&password=%s", env.Host, env.HttpPort, env.Username, env.Password))
if err != nil {
return err
}
if err = conn.Ping(); err != nil {
return err
}
conn, err = sql.Open("clickhouse", fmt.Sprintf("http://%s:%s@%s:%d", env.Username, env.Password, env.Host, env.HttpPort))
if err != nil {
return err
}
return conn.Ping()
}
```
Full Example
Execution {#execution-1}
Once a connection has been obtained, users can issue
sql
statements for execution via the Exec method.
go
conn.Exec(`DROP TABLE IF EXISTS example`)
_, err = conn.Exec(`
CREATE TABLE IF NOT EXISTS example (
Col1 UInt8,
Col2 String
) engine=Memory
`)
if err != nil {
return err
}
_, err = conn.Exec("INSERT INTO example VALUES (1, 'test-1')")
Full Example
This method does not support receiving a context - by default, it executes with the background context. Users can use
ExecContext
if this is needed - see
Using Context
.
Batch Insert {#batch-insert-1}
Batch semantics can be achieved by creating a
sql.Tx
via the
Begin
method. From this, a batch can be obtained using the
Prepare
method with the
INSERT
statement. This returns a
sql.Stmt
to which rows can be appended using the
Exec
method. The batch will be accumulated in memory until
Commit
is executed on the original
sql.Tx
. | {"source_file": "index.md"} | [
go
batch, err := scope.Prepare("INSERT INTO example")
if err != nil {
return err
}
for i := 0; i < 1000; i++ {
_, err := batch.Exec(
uint8(42),
"ClickHouse", "Inc",
uuid.New(),
map[string]uint8{"key": 1}, // Map(String, UInt8)
[]string{"Q", "W", "E", "R", "T", "Y"}, // Array(String)
[]interface{}{ // Tuple(String, UInt8, Array(Map(String, String)))
"String Value", uint8(5), []map[string]string{
map[string]string{"key": "value"},
map[string]string{"key": "value"},
map[string]string{"key": "value"},
},
},
time.Now(),
)
if err != nil {
return err
}
}
return scope.Commit()
Full Example
Querying row/s {#querying-rows-1}
Querying a single row can be achieved using the
QueryRow
method. This returns a *sql.Row, on which Scan can be invoked with pointers to variables into which the columns should be marshaled. A
QueryRowContext
variant allows a context to be passed other than background - see
Using Context
.
go
row := conn.QueryRow("SELECT * FROM example")
var (
col1 uint8
col2, col3, col4 string
col5 map[string]uint8
col6 []string
col7 interface{}
col8 time.Time
)
if err := row.Scan(&col1, &col2, &col3, &col4, &col5, &col6, &col7, &col8); err != nil {
return err
}
Full Example
Iterating multiple rows requires the
Query
method. This returns a
*sql.Rows
struct on which Next can be invoked to iterate through the rows.
QueryContext
equivalent allows passing of a context.
go
rows, err := conn.Query("SELECT * FROM example")
if err != nil {
return err
}
var (
col1 uint8
col2, col3, col4 string
col5 map[string]uint8
col6 []string
col7 interface{}
col8 time.Time
)
for rows.Next() {
if err := rows.Scan(&col1, &col2, &col3, &col4, &col5, &col6, &col7, &col8); err != nil {
return err
}
fmt.Printf("row: col1=%d, col2=%s, col3=%s, col4=%s, col5=%v, col6=%v, col7=%v, col8=%v\n", col1, col2, col3, col4, col5, col6, col7, col8)
}
Full Example
Async Insert {#async-insert-1}
Asynchronous inserts can be achieved by executing an insert via the
ExecContext
method. This should be passed a context with asynchronous mode enabled, as shown below. This allows the user to specify whether the client should wait for the server to complete the insert or respond once the data has been received. This effectively controls the parameter
wait_for_async_insert
. | {"source_file": "index.md"} | [
go
const ddl = `
CREATE TABLE example (
Col1 UInt64
, Col2 String
, Col3 Array(UInt8)
, Col4 DateTime
) ENGINE = Memory
`
if _, err := conn.Exec(ddl); err != nil {
return err
}
ctx := clickhouse.Context(context.Background(), clickhouse.WithStdAsync(false))
{
for i := 0; i < 100; i++ {
_, err := conn.ExecContext(ctx, fmt.Sprintf(`INSERT INTO example VALUES (
%d, '%s', [1, 2, 3, 4, 5, 6, 7, 8, 9], now()
)`, i, "Golang SQL database driver"))
if err != nil {
return err
}
}
}
Full Example
Columnar Insert {#columnar-insert-1}
Not supported using the standard interface.
Using structs {#using-structs-1}
Not supported using the standard interface.
Type conversions {#type-conversions-1}
The standard
database/sql
interface should support the same types as the
ClickHouse API
. There are a few exceptions, primarily for complex types, that we document below. Similar to the ClickHouse API, the client aims to be as flexible as possible concerning accepting variable types for both insertion and marshaling of responses. See
Type Conversions
for further details.
Complex types {#complex-types-1}
Unless stated, complex type handling should be the same as the
ClickHouse API
. Differences are a result of
database/sql
internals.
Maps {#maps}
Unlike the ClickHouse API, the standard API requires maps to be strongly typed at scan time. For example, users cannot pass a
map[string]interface{}
for a
Map(String,String)
field and must use a
map[string]string
instead. An
interface{}
variable will always be compatible and can be used for more complex structures. Structs are not supported at read time.
go
var (
col1Data = map[string]uint64{
"key_col_1_1": 1,
"key_col_1_2": 2,
}
col2Data = map[string]uint64{
"key_col_2_1": 10,
"key_col_2_2": 20,
}
col3Data = map[string]uint64{}
col4Data = []map[string]string{
{"A": "B"},
{"C": "D"},
}
col5Data = map[string]uint64{
"key_col_5_1": 100,
"key_col_5_2": 200,
}
)
if _, err := batch.Exec(col1Data, col2Data, col3Data, col4Data, col5Data); err != nil {
return err
}
if err = scope.Commit(); err != nil {
return err
}
var (
col1 interface{}
col2 map[string]uint64
col3 map[string]uint64
col4 []map[string]string
col5 map[string]uint64
)
if err := conn.QueryRow("SELECT * FROM example").Scan(&col1, &col2, &col3, &col4, &col5); err != nil {
return err
}
fmt.Printf("col1=%v, col2=%v, col3=%v, col4=%v, col5=%v", col1, col2, col3, col4, col5)
Full Example
Insert behavior is the same as the ClickHouse API.
Compression {#compression-1} | {"source_file": "index.md"} | [
The standard API supports the same compression algorithms as native
ClickHouse API
i.e.
lz4
and
zstd
compression at a block level. In addition, gzip, deflate and br compression are supported for HTTP connections. If any of these are enabled, compression is performed on blocks during insertion and for query responses. Other requests e.g. pings or query requests, will remain uncompressed. This is consistent with
lz4
and
zstd
options.
If using the
OpenDB
method to establish a connection, a Compression configuration can be passed. This includes the ability to specify the compression level (see below). If connecting via
sql.Open
with DSN, utilize the parameter
compress
. This can either be a specific compression algorithm i.e.
gzip
,
deflate
,
br
,
zstd
or
lz4
or a boolean flag. If set to true,
lz4
will be used. The default is
none
i.e. compression disabled.
go
conn := clickhouse.OpenDB(&clickhouse.Options{
Addr: []string{fmt.Sprintf("%s:%d", env.Host, env.HttpPort)},
Auth: clickhouse.Auth{
Database: env.Database,
Username: env.Username,
Password: env.Password,
},
Compression: &clickhouse.Compression{
Method: clickhouse.CompressionBrotli,
Level: 5,
},
Protocol: clickhouse.HTTP,
})
Full Example
go
conn, err := sql.Open("clickhouse", fmt.Sprintf("http://%s:%d?username=%s&password=%s&compress=gzip&compress_level=5", env.Host, env.HttpPort, env.Username, env.Password))
Full Example
The level of applied compression can be controlled by the DSN parameter compress_level or the Level field of the Compression option. This defaults to 0 but is algorithm specific:
gzip
-
-2
(Best Speed) to
9
(Best Compression)
deflate
-
-2
(Best Speed) to
9
(Best Compression)
br
-
0
(Best Speed) to
11
(Best Compression)
zstd
,
lz4
- ignored
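These ranges can be expressed as a small lookup. The helper below is hypothetical, for illustration only, and is not part of the client API:

```go
package main

import "fmt"

// levelRanges captures the documented compress_level bounds per algorithm.
// zstd and lz4 ignore the level, so any value is accepted for them.
var levelRanges = map[string][2]int{
	"gzip":    {-2, 9},
	"deflate": {-2, 9},
	"br":      {0, 11},
}

// validLevel reports whether level is usable with the given method.
func validLevel(method string, level int) bool {
	bounds, ok := levelRanges[method]
	if !ok {
		return true // zstd, lz4: level is ignored
	}
	return level >= bounds[0] && level <= bounds[1]
}

func main() {
	fmt.Println(validLevel("gzip", 5)) // true
	fmt.Println(validLevel("br", 12))  // false
	fmt.Println(validLevel("lz4", 99)) // true: level ignored
}
```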
Parameter binding {#parameter-binding-1}
The standard API supports the same parameter binding capabilities as the
ClickHouse API
, allowing parameters to be passed to the
Exec
,
Query
and
QueryRow
methods (and their equivalent
Context
variants). Positional, named and numbered parameters are supported. | {"source_file": "index.md"} | [
go
var count uint64
// positional bind
if err = conn.QueryRow("SELECT count() FROM example WHERE Col1 >= ? AND Col3 < ?", 500, now.Add(time.Duration(750)*time.Second)).Scan(&count); err != nil {
return err
}
// 250
fmt.Printf("Positional bind count: %d\n", count)
// numeric bind
if err = conn.QueryRow("SELECT count() FROM example WHERE Col1 <= $2 AND Col3 > $1", now.Add(time.Duration(150)*time.Second), 250).Scan(&count); err != nil {
return err
}
// 100
fmt.Printf("Numeric bind count: %d\n", count)
// named bind
if err = conn.QueryRow("SELECT count() FROM example WHERE Col1 <= @col1 AND Col3 > @col3", clickhouse.Named("col1", 100), clickhouse.Named("col3", now.Add(time.Duration(50)*time.Second))).Scan(&count); err != nil {
return err
}
// 50
fmt.Printf("Named bind count: %d\n", count)
Full Example
Note
special cases
still apply.
Using context {#using-context-1}
The standard API supports the same ability to pass deadlines, cancellation signals, and other request-scoped values via the context as the
ClickHouse API
. Unlike the ClickHouse API, this is achieved by using
Context
variants of the methods i.e. methods such as
Exec
, which use the background context by default, have a variant
ExecContext
to which a context can be passed as the first parameter. This allows a context to be passed at any stage of an application flow. For example, users can pass a context when establishing a connection via
ConnContext
or when requesting a query row via
QueryRowContext
. Examples of all available methods are shown below.
For more detail on using the context to pass deadlines, cancellation signals, query ids, quota keys and connection settings see Using Context for the
ClickHouse API
.
```go
ctx := clickhouse.Context(context.Background(), clickhouse.WithSettings(clickhouse.Settings{
"allow_experimental_object_type": "1",
}))
conn.ExecContext(ctx, "DROP TABLE IF EXISTS example")
// to create a JSON column we need allow_experimental_object_type=1
if _, err = conn.ExecContext(ctx, `
CREATE TABLE example (
Col1 JSON
)
Engine Memory
`); err != nil {
return err
}
// queries can be cancelled using the context
ctx, cancel := context.WithCancel(context.Background())
go func() {
cancel()
}()
if err = conn.QueryRowContext(ctx, "SELECT sleep(3)").Scan(); err == nil {
return fmt.Errorf("expected cancel")
}
// set a deadline for a query - this will cancel the query after the absolute time is reached. Again terminates the connection only,
// queries will continue to completion in ClickHouse
ctx, cancel = context.WithDeadline(context.Background(), time.Now().Add(-time.Second))
defer cancel()
if err := conn.PingContext(ctx); err == nil {
return fmt.Errorf("expected deadline exceeded")
} | {"source_file": "index.md"} | [
// set a query id to assist tracing queries in logs e.g. see system.query_log
var one uint8
ctx = clickhouse.Context(context.Background(), clickhouse.WithQueryID(uuid.NewString()))
if err = conn.QueryRowContext(ctx, "SELECT 1").Scan(&one); err != nil {
return err
}
conn.ExecContext(context.Background(), "DROP QUOTA IF EXISTS foobar")
defer func() {
conn.ExecContext(context.Background(), "DROP QUOTA IF EXISTS foobar")
}()
ctx = clickhouse.Context(context.Background(), clickhouse.WithQuotaKey("abcde"))
// set a quota key - first create the quota
if _, err = conn.ExecContext(ctx, "CREATE QUOTA IF NOT EXISTS foobar KEYED BY client_key FOR INTERVAL 1 minute MAX queries = 5 TO default"); err != nil {
return err
}
// queries can be cancelled using the context
ctx, cancel = context.WithCancel(context.Background())
// we will get some results before cancel
ctx = clickhouse.Context(ctx, clickhouse.WithSettings(clickhouse.Settings{
"max_block_size": "1",
}))
rows, err := conn.QueryContext(ctx, "SELECT sleepEachRow(1), number FROM numbers(100);")
if err != nil {
return err
}
var (
col1 uint8
col2 uint8
)
for rows.Next() {
if err := rows.Scan(&col1, &col2); err != nil {
if col2 > 3 {
fmt.Println("expected cancel")
return nil
}
return err
}
fmt.Printf("row: col2=%d\n", col2)
if col2 == 3 {
cancel()
}
}
```
Full Example
Sessions {#sessions}
While native connections inherently have a session, connections over HTTP require the user to create a session id for passing in a context as a setting. This allows the use of features, e.g., Temporary tables, which are bound to a session.
go
conn := clickhouse.OpenDB(&clickhouse.Options{
Addr: []string{fmt.Sprintf("%s:%d", env.Host, env.HttpPort)},
Auth: clickhouse.Auth{
Database: env.Database,
Username: env.Username,
Password: env.Password,
},
Protocol: clickhouse.HTTP,
Settings: clickhouse.Settings{
"session_id": uuid.NewString(),
},
})
if _, err := conn.Exec(`DROP TABLE IF EXISTS example`); err != nil {
return err
}
_, err = conn.Exec(`
CREATE TEMPORARY TABLE IF NOT EXISTS example (
Col1 UInt8
)
`)
if err != nil {
return err
}
scope, err := conn.Begin()
if err != nil {
return err
}
batch, err := scope.Prepare("INSERT INTO example")
if err != nil {
return err
}
for i := 0; i < 10; i++ {
_, err := batch.Exec(
uint8(i),
)
if err != nil {
return err
}
}
rows, err := conn.Query("SELECT * FROM example")
if err != nil {
return err
}
var (
col1 uint8
)
for rows.Next() {
if err := rows.Scan(&col1); err != nil {
return err
}
fmt.Printf("row: col1=%d\n", col1)
}
Full Example
Dynamic scanning {#dynamic-scanning-1} | {"source_file": "index.md"} | [
Similar to the
ClickHouse API
, column type information is available to allow users to create runtime instances of correctly typed variables which can be passed to Scan. This allows columns to be read where the type is not known.
go
const query = `
SELECT
1 AS Col1
, 'Text' AS Col2
`
rows, err := conn.QueryContext(context.Background(), query)
if err != nil {
return err
}
columnTypes, err := rows.ColumnTypes()
if err != nil {
return err
}
vars := make([]interface{}, len(columnTypes))
for i := range columnTypes {
vars[i] = reflect.New(columnTypes[i].ScanType()).Interface()
}
for rows.Next() {
if err := rows.Scan(vars...); err != nil {
return err
}
for _, v := range vars {
switch v := v.(type) {
case *string:
fmt.Println(*v)
case *uint8:
fmt.Println(*v)
}
}
}
Full Example
External tables {#external-tables-1}
External tables
allow the client to send data to ClickHouse, with a
SELECT
query. This data is put in a temporary table and can be used in the query itself for evaluation.
To send external data to ClickHouse with a query, the user must build an external table via
ext.NewTable
before passing this via the context.
```go
table1, err := ext.NewTable("external_table_1",
ext.Column("col1", "UInt8"),
ext.Column("col2", "String"),
ext.Column("col3", "DateTime"),
)
if err != nil {
return err
}
for i := 0; i < 10; i++ {
if err = table1.Append(uint8(i), fmt.Sprintf("value_%d", i), time.Now()); err != nil {
return err
}
}
table2, err := ext.NewTable("external_table_2",
ext.Column("col1", "UInt8"),
ext.Column("col2", "String"),
ext.Column("col3", "DateTime"),
)
for i := 0; i < 10; i++ {
table2.Append(uint8(i), fmt.Sprintf("value_%d", i), time.Now())
}
ctx := clickhouse.Context(context.Background(),
clickhouse.WithExternalTable(table1, table2),
)
rows, err := conn.QueryContext(ctx, "SELECT * FROM external_table_1")
if err != nil {
return err
}
for rows.Next() {
var (
col1 uint8
col2 string
col3 time.Time
)
rows.Scan(&col1, &col2, &col3)
fmt.Printf("col1=%d, col2=%s, col3=%v\n", col1, col2, col3)
}
rows.Close()
var count uint64
if err := conn.QueryRowContext(ctx, "SELECT COUNT(*) FROM external_table_1").Scan(&count); err != nil {
return err
}
fmt.Printf("external_table_1: %d\n", count)
if err := conn.QueryRowContext(ctx, "SELECT COUNT(*) FROM external_table_2").Scan(&count); err != nil {
return err
}
fmt.Printf("external_table_2: %d\n", count)
if err := conn.QueryRowContext(ctx, "SELECT COUNT(*) FROM (SELECT * FROM external_table_1 UNION ALL SELECT * FROM external_table_2)").Scan(&count); err != nil {
return err
}
fmt.Printf("external_table_1 UNION external_table_2: %d\n", count)
```
Full Example
Open telemetry {#open-telemetry-1} | {"source_file": "index.md"} | [
ClickHouse allows a
trace context
to be passed as part of the native protocol. The client allows a Span to be created via the function
clickhouse.WithSpan
and passed via the Context to achieve this. This is not supported when HTTP is used as transport.
go
var count uint64
rows := conn.QueryRowContext(clickhouse.Context(context.Background(), clickhouse.WithSpan(
trace.NewSpanContext(trace.SpanContextConfig{
SpanID: trace.SpanID{1, 2, 3, 4, 5},
TraceID: trace.TraceID{5, 4, 3, 2, 1},
}),
)), "SELECT COUNT() FROM (SELECT number FROM system.numbers LIMIT 5)")
if err := rows.Scan(&count); err != nil {
return err
}
fmt.Printf("count: %d\n", count)
Full Example
Performance tips {#performance-tips}
Utilize the ClickHouse API where possible, especially for primitive types. This avoids significant reflection and indirection.
If reading large datasets, consider modifying the
BlockBufferSize
. This will increase the memory footprint but will mean more blocks can be decoded in parallel during row iteration. The default value of 2 is conservative and minimizes memory overhead. Higher values will mean more blocks in memory. This requires testing since different queries can produce different block sizes. It can therefore be set on a
query level
via the Context.
Be specific with your types when inserting data. While the client aims to be flexible, e.g., allowing strings to be parsed for UUIDs or IPs, this requires data validation and incurs a cost at insert time.
Use column-oriented inserts where possible. Again these should be strongly typed, avoiding the need for the client to convert your values.
Follow ClickHouse
recommendations
for optimal insert performance. | {"source_file": "index.md"} | [
sidebar_label: 'Advanced Inserting'
sidebar_position: 5
keywords: ['clickhouse', 'python', 'insert', 'advanced']
description: 'Advanced Inserting with ClickHouse Connect'
slug: /integrations/language-clients/python/advanced-inserting
title: 'Advanced Inserting'
doc_type: 'reference'
Inserting data with ClickHouse Connect: Advanced usage {#inserting-data-with-clickhouse-connect--advanced-usage}
InsertContexts {#insertcontexts}
ClickHouse Connect executes all inserts within an
InsertContext
. The
InsertContext
includes all the values sent as arguments to the client
insert
method. In addition, when an
InsertContext
is originally constructed, ClickHouse Connect retrieves the data types for the insert columns required for efficient Native format inserts. By reusing the
InsertContext
for multiple inserts, this "pre-query" is avoided and inserts are executed more quickly and efficiently.
An
InsertContext
can be acquired using the client
create_insert_context
method. The method takes the same arguments as the
insert
function. Note that only the
data
property of
InsertContext
s should be modified for reuse. This is consistent with its intended purpose of providing a reusable object for repeated inserts of new data to the same table.
python
test_data = [[1, 'v1', 'v2'], [2, 'v3', 'v4']]
ic = client.create_insert_context(table='test_table', data=test_data)
client.insert(context=ic)
assert client.command('SELECT count() FROM test_table') == 2
new_data = [[3, 'v5', 'v6'], [4, 'v7', 'v8']]
ic.data = new_data
client.insert(context=ic)
qr = client.query('SELECT * FROM test_table ORDER BY key DESC')
assert qr.row_count == 4
assert qr[0][0] == 4
InsertContext
s include mutable state that is updated during the insert process, so they are not thread safe.
Write formats {#write-formats}
Write formats are currently implemented for a limited number of types. In most cases ClickHouse Connect will attempt to automatically determine the correct write format for a column by checking the type of the first (non-null) data value. For example, if inserting into a
DateTime
column, and the first insert value of the column is a Python integer, ClickHouse Connect will directly insert the integer value under the assumption that it's actually an epoch second.
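The epoch-second assumption can be illustrated with a stdlib-only sketch. Here as_epoch_seconds is a hypothetical helper that mirrors the documented behavior; it is not the library's internal code:

```python
from datetime import datetime, timezone

def as_epoch_seconds(value):
    """Mirror the documented DateTime write behavior: a bare int is taken
    to already be epoch seconds; a datetime is converted to one."""
    if isinstance(value, int):
        return value
    return int(value.timestamp())

dt = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(as_epoch_seconds(dt))          # 1704067200
print(as_epoch_seconds(1704067200))  # 1704067200
```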
In most cases, it is unnecessary to override the write format for a data type, but the associated methods in the
clickhouse_connect.datatypes.format
package can be used to do so at a global level.
Write format options {#write-format-options} | {"source_file": "advanced-inserting.md"} | [
| ClickHouse Type | Native Python Type | Write Formats | Comments |
|-----------------------|-------------------------|-------------------|-------------------------------------------------------------------------------------------------------------|
| Int[8-64], UInt[8-32] | int | - | |
| UInt64 | int | | |
| [U]Int[128,256] | int | | |
| BFloat16 | float | | |
| Float32 | float | | |
| Float64 | float | | |
| Decimal | decimal.Decimal | | |
| String | string | | |
| FixedString | bytes | string | If inserted as a string, additional bytes will be set to zeros |
| Enum[8,16] | string | | |
| Date | datetime.date | int | ClickHouse stores Dates as days since 01/01/1970. int types will be assumed to be this "epoch date" value |
| Date32 | datetime.date | int | Same as Date, but for a wider range of dates |
| DateTime | datetime.datetime | int | ClickHouse stores DateTime in epoch seconds. int types will be assumed to be this "epoch second" value |
| DateTime64 | datetime.datetime | int | Python datetime.datetime is limited to microsecond precision. The raw 64 bit int value is available | | {"source_file": "advanced-inserting.md"} | [
| Time | datetime.timedelta | int, string, time | ClickHouse stores Time as seconds. int types will be assumed to be this seconds value |
| Time64 | datetime.timedelta | int, string, time | Python datetime.timedelta is limited to microsecond precision. The raw 64 bit int value is available |
| IPv4 |
ipaddress.IPv4Address
| string | Properly formatted strings can be inserted as IPv4 addresses |
| IPv6 |
ipaddress.IPv6Address
| string | Properly formatted strings can be inserted as IPv6 addresses |
| Tuple | dict or tuple | | |
| Map | dict | | |
| Nested | Sequence[dict] | | |
| UUID | uuid.UUID | string | Properly formatted strings can be inserted as ClickHouse UUIDs |
| JSON/Object('json') | dict | string | Either dictionaries or JSON strings can be inserted into JSON Columns (note
Object('json')
is deprecated) |
| Variant | object | | At this time all variants are inserted as Strings and parsed by the ClickHouse server |
| Dynamic | object | | Warning -- at this time any inserts into a Dynamic column are persisted as a ClickHouse String | | {"source_file": "advanced-inserting.md"} | [
Specialized insert methods {#specialized-insert-methods}
ClickHouse Connect provides specialized insert methods for common data formats:
`insert_df` -- Insert a Pandas DataFrame. Instead of a Python Sequence of Sequences `data` argument, the second parameter of this method requires a `df` argument that must be a Pandas DataFrame instance. ClickHouse Connect automatically processes the DataFrame as a column oriented datasource, so the `column_oriented` parameter is not required or available.
`insert_arrow` -- Insert a PyArrow Table. ClickHouse Connect passes the Arrow table unmodified to the ClickHouse server for processing, so only the `database` and `settings` arguments are available in addition to `table` and `arrow_table`.
`insert_df_arrow` -- Insert an arrow-backed Pandas DataFrame or a Polars DataFrame. ClickHouse Connect will automatically determine if the DataFrame is a Pandas or Polars type. If Pandas, validation will be performed to ensure that each column's dtype backend is Arrow-based and an error will be raised if any are not.
:::note
A NumPy array is a valid Sequence of Sequences and can be used as the `data` argument to the main `insert` method, so a specialized method is not required.
:::
Pandas DataFrame insert {#pandas-dataframe-insert}
```python
import clickhouse_connect
import pandas as pd
client = clickhouse_connect.get_client()
df = pd.DataFrame({
"id": [1, 2, 3],
"name": ["Alice", "Bob", "Joe"],
"age": [25, 30, 28],
})
client.insert_df("users", df)
```
PyArrow Table insert {#pyarrow-table-insert}
```python
import clickhouse_connect
import pyarrow as pa
client = clickhouse_connect.get_client()
arrow_table = pa.table({
"id": [1, 2, 3],
"name": ["Alice", "Bob", "Joe"],
"age": [25, 30, 28],
})
client.insert_arrow("users", arrow_table)
```
Arrow-backed DataFrame insert (pandas 2.x) {#arrow-backed-dataframe-insert-pandas-2}
```python
import clickhouse_connect
import pandas as pd
client = clickhouse_connect.get_client()
# Convert to Arrow-backed dtypes for better performance
df = pd.DataFrame({
"id": [1, 2, 3],
"name": ["Alice", "Bob", "Joe"],
"age": [25, 30, 28],
}).convert_dtypes(dtype_backend="pyarrow")
client.insert_df_arrow("users", df)
```
Time zones {#time-zones}
When inserting Python `datetime.datetime` objects into ClickHouse `DateTime` or `DateTime64` columns, ClickHouse Connect automatically handles timezone information. Since ClickHouse stores all DateTime values internally as timezone-naive Unix timestamps (seconds or fractional seconds since the epoch), timezone conversion happens automatically on the client side during insertion.
Timezone-aware datetime objects {#timezone-aware-datetime-objects} | {"source_file": "advanced-inserting.md"} | [
If you insert a timezone-aware Python `datetime.datetime` object, ClickHouse Connect will automatically call `.timestamp()` to convert it to a Unix timestamp, which correctly accounts for the timezone offset. This means you can insert datetime objects from any timezone, and they will be correctly stored as their UTC equivalent timestamp.
```python
import clickhouse_connect
from datetime import datetime
import pytz
client = clickhouse_connect.get_client()
client.command("CREATE TABLE events (event_time DateTime) ENGINE Memory")
# Insert timezone-aware datetime objects
denver_tz = pytz.timezone('America/Denver')
tokyo_tz = pytz.timezone('Asia/Tokyo')
data = [
[datetime(2023, 6, 15, 10, 30, 0, tzinfo=pytz.UTC)],
[denver_tz.localize(datetime(2023, 6, 15, 10, 30, 0))],
[tokyo_tz.localize(datetime(2023, 6, 15, 10, 30, 0))]
]
client.insert('events', data, column_names=['event_time'])
results = client.query("SELECT * from events")
print(*results.result_rows, sep="\n")
# Output:
# (datetime.datetime(2023, 6, 15, 10, 30),)
# (datetime.datetime(2023, 6, 15, 16, 30),)
# (datetime.datetime(2023, 6, 15, 1, 30),)
```
In this example, all three datetime objects represent different points in time because they have different timezones. Each will be correctly converted to its corresponding Unix timestamp and stored in ClickHouse.
:::note
When using pytz, you must use the `localize()` method to attach timezone information to a naive datetime. Passing `tzinfo=` directly to the datetime constructor will use incorrect historical offsets. For UTC, `tzinfo=pytz.UTC` works correctly. See the pytz docs for more info.
:::
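The conversion described above can be sanity-checked without a server: a timezone-aware datetime's `.timestamp()` already folds in its UTC offset, so the three identical wall-clock times in the example map to three distinct epoch values. A minimal sketch using only standard-library fixed offsets (real code should use zoneinfo or pytz to get DST rules right; the summer offsets assumed here are Denver UTC-6 and Tokyo UTC+9):

```python
from datetime import datetime, timedelta, timezone

# Same wall-clock time in three zones (fixed summer offsets assumed)
utc_dt = datetime(2023, 6, 15, 10, 30, tzinfo=timezone.utc)
denver_dt = datetime(2023, 6, 15, 10, 30, tzinfo=timezone(timedelta(hours=-6)))
tokyo_dt = datetime(2023, 6, 15, 10, 30, tzinfo=timezone(timedelta(hours=9)))

# .timestamp() accounts for the offset, so each yields a different epoch
# value -- exactly what gets stored in ClickHouse.
assert denver_dt.timestamp() == utc_dt.timestamp() + 6 * 3600   # 16:30 UTC
assert tokyo_dt.timestamp() == utc_dt.timestamp() - 9 * 3600    # 01:30 UTC
```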
Timezone-naive datetime objects {#timezone-naive-datetime-objects}
If you insert a timezone-naive Python `datetime.datetime` object (one without `tzinfo`), the `.timestamp()` method will interpret it as being in the system's local timezone. To avoid ambiguity, it's recommended to:
- Always use timezone-aware datetime objects when inserting, or
- Ensure your system timezone is set to UTC, or
- Manually convert to epoch timestamps before inserting
```python
import clickhouse_connect
from datetime import datetime
import pytz
client = clickhouse_connect.get_client()
# Recommended: Always use timezone-aware datetimes
utc_time = datetime(2023, 6, 15, 10, 30, 0, tzinfo=pytz.UTC)
client.insert('events', [[utc_time]], column_names=['event_time'])
# Alternative: Convert to epoch timestamp manually
naive_time = datetime(2023, 6, 15, 10, 30, 0)
epoch_timestamp = int(naive_time.replace(tzinfo=pytz.UTC).timestamp())
client.insert('events', [[epoch_timestamp]], column_names=['event_time'])
```
DateTime columns with timezone metadata {#datetime-columns-with-timezone-metadata} | {"source_file": "advanced-inserting.md"} | [
ClickHouse columns can be defined with timezone metadata (e.g., `DateTime('America/Denver')` or `DateTime64(3, 'Asia/Tokyo')`). This metadata doesn't affect how data is stored (still as UTC timestamps), but it controls the timezone used when querying data back from ClickHouse.
When inserting into such columns, ClickHouse Connect converts your Python datetime to a Unix timestamp (accounting for its timezone if present). When you query the data back, ClickHouse Connect will return the datetime converted to the column's timezone, regardless of what timezone you used when inserting.
```python
import clickhouse_connect
from datetime import datetime
import pytz
client = clickhouse_connect.get_client()
# Create table with Los Angeles timezone metadata
client.command("CREATE TABLE events (event_time DateTime('America/Los_Angeles')) ENGINE Memory")
# Insert a New York time (10:30 AM EDT, which is 14:30 UTC)
ny_tz = pytz.timezone("America/New_York")
data = ny_tz.localize(datetime(2023, 6, 15, 10, 30, 0))
client.insert("events", [[data]], column_names=["event_time"])
# When queried back, the time is automatically converted to Los Angeles timezone:
# 10:30 AM New York (UTC-4) = 14:30 UTC = 7:30 AM Los Angeles (UTC-7)
results = client.query("select * from events")
print(*results.result_rows, sep="\n")
# Output:
# (datetime.datetime(2023, 6, 15, 7, 30, tzinfo=...),)
```
File inserts {#file-inserts}
The `clickhouse_connect.driver.tools` package includes the `insert_file` method that allows inserting data directly from the file system into an existing ClickHouse table. Parsing is delegated to the ClickHouse server. `insert_file` accepts the following parameters:
| Parameter | Type | Default | Description |
|--------------|-----------------|-------------------|---------------------------------------------------------------------------------------------------------------------------|
| client | Client | Required | The `driver.Client` used to perform the insert |
| table | str | Required | The ClickHouse table to insert into. The full table name (including database) is permitted. |
| file_path | str | Required | The native file system path to the data file |
| fmt | str | CSV, CSVWithNames | The ClickHouse Input Format of the file. CSVWithNames is assumed if `column_names` is not provided |
| column_names | Sequence of str | None | A list of column names in the data file. Not required for formats that include column names |
| database | str | None | Database of the table. Ignored if the table is fully qualified. If not specified, the insert will use the client database |
| settings | dict | None | See settings description. |
| compression | str | None | A recognized ClickHouse compression type (zstd, lz4, gzip) used for the Content-Encoding HTTP header |
For files with inconsistent data or date/time values in an unusual format, settings that apply to data imports (such as `input_format_allow_errors_num` and `input_format_allow_errors_ratio`) are recognized for this method.
```python
import clickhouse_connect
from clickhouse_connect.driver.tools import insert_file
client = clickhouse_connect.get_client()
insert_file(client, 'example_table', 'my_data.csv',
settings={'input_format_allow_errors_ratio': .2,
'input_format_allow_errors_num': 5})
``` | {"source_file": "advanced-inserting.md"} | [
sidebar_label: 'SQLAlchemy'
sidebar_position: 7
keywords: ['clickhouse', 'python', 'sqlalchemy', 'integrate']
description: 'ClickHouse SQLAlchemy Support'
slug: /integrations/language-clients/python/sqlalchemy
title: 'SQLAlchemy Support'
doc_type: 'reference'
ClickHouse Connect includes a SQLAlchemy dialect (`clickhousedb`) built on top of the core driver. It targets SQLAlchemy Core APIs and supports SQLAlchemy 1.4.40+ and 2.0.x.
Connect with SQLAlchemy {#sqlalchemy-connect}
Create an engine using either `clickhousedb://` or `clickhousedb+connect://` URLs. Query parameters map to ClickHouse settings, client options, and HTTP/TLS transport options.
```python
from sqlalchemy import create_engine, text
engine = create_engine(
"clickhousedb://user:password@host:8123/mydb?compression=zstd"
)
with engine.begin() as conn:
rows = conn.execute(text("SELECT version()"))
print(rows.scalar())
```
Notes on URL/query parameters:
- ClickHouse settings: pass as query parameters (for example, `use_skip_indexes=0`).
- Client options: `compression` (alias for `compress`), `query_limit`, timeouts, and more.
- HTTP/TLS options: options for the HTTP pool and TLS (for example, `ch_http_max_field_name_size=99999`, `ca_cert=certifi`).
See Connection arguments and Settings in the sections below for the full list of supported options. These can also be supplied via the SQLAlchemy DSN.
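As an illustration of the mapping above (the parameter values here are hypothetical), the query string can be assembled with the standard library; each key/value pair becomes a ClickHouse setting, a client option, or a transport option:

```python
from urllib.parse import urlencode

# Hypothetical values for each of the three categories described above
params = {
    "compression": "zstd",                 # client option
    "use_skip_indexes": 0,                 # ClickHouse setting
    "ch_http_max_field_name_size": 99999,  # HTTP transport option
}
url = "clickhousedb://user:password@host:8123/mydb?" + urlencode(params)
# The resulting URL can be passed to sqlalchemy.create_engine(...)
```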
Core queries {#sqlalchemy-core-queries}
The dialect supports SQLAlchemy Core `SELECT` queries with joins, filters, ordering, limits/offsets, and `DISTINCT`.
```python
from sqlalchemy import MetaData, Table, select
metadata = MetaData(schema="mydb")
users = Table("users", metadata, autoload_with=engine)
orders = Table("orders", metadata, autoload_with=engine)
# Basic SELECT
with engine.begin() as conn:
rows = conn.execute(select(users.c.id, users.c.name).order_by(users.c.id).limit(10)).fetchall()
# JOINs (INNER/LEFT OUTER/FULL OUTER/CROSS)
with engine.begin() as conn:
stmt = (
select(users.c.name, orders.c.product)
.select_from(users.join(orders, users.c.id == orders.c.user_id))
)
rows = conn.execute(stmt).fetchall()
```
Lightweight `DELETE` with a required `WHERE` clause is supported:
```python
from sqlalchemy import delete
with engine.begin() as conn:
conn.execute(delete(users).where(users.c.name.like("%temp%")))
```
DDL and reflection {#sqlalchemy-ddl-reflection}
You can create databases and tables using the provided DDL helpers and type/engine constructs. Table reflection (including column types and engine) is supported.
```python
import sqlalchemy as db
from sqlalchemy import MetaData
from clickhouse_connect.cc_sqlalchemy.ddl.custom import CreateDatabase, DropDatabase
from clickhouse_connect.cc_sqlalchemy.ddl.tableengine import MergeTree
from clickhouse_connect.cc_sqlalchemy.datatypes.sqltypes import UInt32, String, DateTime64 | {"source_file": "sqlalchemy.md"} | [
with engine.begin() as conn:
# Databases
conn.execute(CreateDatabase("example_db", exists_ok=True))
# Tables
metadata = MetaData(schema="example_db")
table = db.Table(
"events",
metadata,
db.Column("id", UInt32, primary_key=True),
db.Column("user", String),
db.Column("created_at", DateTime64(3)),
MergeTree(order_by="id"),
)
table.create(conn)
# Reflection
reflected = db.Table("events", metadata, autoload_with=engine)
assert reflected.engine is not None
```
Reflected columns include dialect-specific attributes such as `clickhousedb_default_type`, `clickhousedb_codec_expression`, and `clickhousedb_ttl_expression` when present on the server.
Inserts (Core and basic ORM) {#sqlalchemy-inserts}
Inserts work via SQLAlchemy Core as well as with simple ORM models for convenience.
```python
# Core insert
with engine.begin() as conn:
conn.execute(table.insert().values(id=1, user="joe"))
# Basic ORM insert
from sqlalchemy.orm import declarative_base, Session
Base = declarative_base(metadata=MetaData(schema="example_db"))
class User(Base):
    __tablename__ = "users"
    __table_args__ = (MergeTree(order_by=["id"]),)
id = db.Column(UInt32, primary_key=True)
name = db.Column(String)
Base.metadata.create_all(engine)
with Session(engine) as session:
session.add(User(id=1, name="Alice"))
session.bulk_save_objects([User(id=2, name="Bob")])
session.commit()
```
Scope and limitations {#scope-and-limitations}
- Core focus: Enable SQLAlchemy Core features like `SELECT` with `JOIN`s (`INNER`, `LEFT OUTER`, `FULL OUTER`, `CROSS`), `WHERE`, `ORDER BY`, `LIMIT`/`OFFSET`, and `DISTINCT`.
- `DELETE` with `WHERE` only: The dialect supports lightweight `DELETE` but requires an explicit `WHERE` clause to avoid accidental full-table deletes. To clear a table, use `TRUNCATE TABLE`.
- No `UPDATE`: ClickHouse is append-optimized. The dialect does not implement `UPDATE`. If you need to change data, apply transformations upstream and re-insert, or use explicit text SQL (for example, `ALTER TABLE ... UPDATE`) at your own risk.
- DDL and reflection: Creating databases and tables is supported, and reflection returns column types and table engine metadata. Traditional PK/FK/index metadata is not present because ClickHouse does not enforce those constraints.
- ORM scope: Declarative models and inserts via `Session.add(...)`/`bulk_save_objects(...)` work for convenience. Advanced ORM features (relationship management, unit-of-work updates, cascading, eager/lazy loading semantics) are not supported.
- Primary key semantics: `Column(..., primary_key=True)` is used by SQLAlchemy for object identity only. It does not create a server-side constraint in ClickHouse. Define `ORDER BY` (and optional `PRIMARY KEY`) via table engines (for example, `MergeTree(order_by=...)`).
- Transactions and server features: Two-phase transactions, sequences, `RETURNING`, and advanced isolation levels are not supported. `engine.begin()` provides a Python context manager for grouping statements but performs no actual transaction control (commit/rollback are no-ops).
sidebar_label: 'Driver API'
sidebar_position: 2
keywords: ['clickhouse', 'python', 'driver', 'api', 'client']
description: 'ClickHouse Connect Driver API'
slug: /integrations/language-clients/python/driver-api
title: 'ClickHouse Connect Driver API'
doc_type: 'reference'
ClickHouse Connect driver API {#clickhouse-connect-driver-api}
:::note
Passing keyword arguments is recommended for most API methods, given the number of possible arguments, most of which are optional.
Methods not documented here are not considered part of the API, and may be removed or changed.
:::
Client Initialization {#client-initialization}
The `clickhouse_connect.driver.client` class provides the primary interface between a Python application and the ClickHouse database server. Use the `clickhouse_connect.get_client` function to obtain a Client instance, which accepts the following arguments:
Connection arguments {#connection-arguments} | {"source_file": "driver-api.md"} | [
| Parameter | Type | Default | Description |
|--------------------------|-------------|-------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| interface | str | http | Must be http or https. |
| host | str | localhost | The hostname or IP address of the ClickHouse server. If not set, `localhost` will be used. |
| port | int | 8123 or 8443 | The ClickHouse HTTP or HTTPS port. If not set, will default to 8123, or to 8443 if `secure`=`True` or `interface`=`https`. |
| username | str | default | The ClickHouse user name. If not set, the `default` ClickHouse user will be used. |
| password | str | <empty string> | The password for `username`. |
| database | str | None | The default database for the connection. If not set, ClickHouse Connect will use the default database for `username`. |
| secure | bool | False | Use HTTPS/TLS. This overrides inferred values from the interface or port arguments. |
| dsn | str | None | A string in standard DSN (Data Source Name) format. Other connection values (such as host or user) will be extracted from this string if not set otherwise. |
| compress | bool or str | True | Enable compression for ClickHouse HTTP inserts and query results. See Additional Options (Compression) |
| query_limit | int | 0 (unlimited) | Maximum number of rows to return for any `query` response. Set this to zero to return unlimited rows. Note that large query limits may result in out of memory exceptions if results are not streamed, as all results are loaded into memory at once. |
| query_retries | int | 2 | Maximum number of retries for a `query` request. Only "retryable" HTTP responses will be retried. `command` or `insert` requests are not automatically retried by the driver to prevent unintended duplicate requests. |
| connect_timeout | int | 10 | HTTP connection timeout in seconds. |
| send_receive_timeout | int | 300 | Send/receive timeout for the HTTP connection in seconds. |
| client_name | str | None | client_name prepended to the HTTP User Agent header. Set this to track client queries in the ClickHouse system.query_log. |
| pool_mgr | obj | <default PoolManager> | The `urllib3` library PoolManager to use. For advanced use cases requiring multiple connection pools to different hosts. |
| http_proxy | str | None | HTTP proxy address (equivalent to setting the HTTP_PROXY environment variable). |
| https_proxy | str | None | HTTPS proxy address (equivalent to setting the HTTPS_PROXY environment variable). |
| apply_server_timezone | bool | True | Use server timezone for timezone aware query results. See Timezone Precedence |
| show_clickhouse_errors | bool | True | Include detailed ClickHouse server error messages and exception codes in client exceptions. |
| autogenerate_session_id | bool | None | Override the global `autogenerate_session_id` setting. If True, automatically generate a UUID4 session ID when none is provided. |
| proxy_path | str | <empty string> | Optional path prefix to add to the ClickHouse server URL for proxy configurations. |
| form_encode_query_params | bool | False | Send query parameters as form-encoded data in the request body instead of URL parameters. Useful for queries with large parameter sets that might exceed URL length limits. |
| rename_response_column | str | None | Optional callback function or column name mapping to rename response columns in query results. |
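The `dsn` argument described in the table bundles several of these connection values into one string. A sketch (with hypothetical values) of the fields a standard DSN carries, parsed with the standard library:

```python
from urllib.parse import urlparse

# Hypothetical DSN: host, port, user, password, and database travel together;
# values not set explicitly as arguments are extracted from the string.
dsn = "https://demo_user:secret@ch.example.com:8443/analytics"
parts = urlparse(dsn)

print(parts.hostname)           # ch.example.com
print(parts.port)               # 8443
print(parts.username)           # demo_user
print(parts.path.lstrip("/"))   # analytics
```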
HTTPS/TLS arguments {#httpstls-arguments}
| Parameter | Type | Default | Description |
|------------------|------|---------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| verify | bool | True | Validate the ClickHouse server TLS/SSL certificate (hostname, expiration, etc.) if using HTTPS/TLS. |
| ca_cert | str | None | If `verify`=`True`, the file path to a Certificate Authority root used to validate the ClickHouse server certificate, in .pem format. Ignored if verify is False. This is not necessary if the ClickHouse server certificate is a globally trusted root as verified by the operating system. |
| client_cert | str | None | File path to a TLS Client certificate in .pem format (for mutual TLS authentication). The file should contain a full certificate chain, including any intermediate certificates. |
| client_cert_key | str | None | File path to the private key for the Client Certificate. Required if the private key is not included in the Client Certificate file. |
| server_host_name | str | None | The ClickHouse server hostname as identified by the CN or SNI of its TLS certificate. Set this to avoid SSL errors when connecting through a proxy or tunnel with a different hostname. |
| tls_mode | str | None | Controls advanced TLS behavior. `proxy` and `strict` do not invoke ClickHouse mutual TLS connection, but do send the client cert and key. `mutual` assumes ClickHouse mutual TLS auth with a client certificate. The `None`/default behavior is `mutual`. |
Settings argument {#settings-argument} | {"source_file": "driver-api.md"} | [
Finally, the `settings` argument to `get_client` is used to pass additional ClickHouse settings to the server for each client request. Note that in most cases, users with `readonly`=`1` access cannot alter settings sent with a query, so ClickHouse Connect will drop such settings in the final request and log a warning. The following settings apply only to HTTP queries/sessions used by ClickHouse Connect, and are not documented as general ClickHouse settings.
| Setting | Description |
|-------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| buffer_size | Buffer size (in bytes) used by the ClickHouse server before writing to the HTTP channel. |
| session_id | A unique session ID to associate related queries on the server. Required for temporary tables. |
| compress | Whether the ClickHouse server should compress the POST response data. This setting should only be used for "raw" queries. |
| decompress | Whether the data sent to the ClickHouse server must be decompressed. This setting should only be used for "raw" inserts. |
| quota_key | The quota key associated with this request. See the ClickHouse server documentation on quotas. |
| session_check | Used to check the session status. |
| session_timeout | Number of seconds of inactivity before the session identified by the session ID will time out and no longer be considered valid. Defaults to 60 seconds. |
| wait_end_of_query | Buffers the entire response on the ClickHouse server. This setting is required to return summary information, and is set automatically on non-streaming queries. |
| role | ClickHouse role to be used for the session. Valid transport setting that can be included in query context. |
For other ClickHouse settings that can be sent with each query, see the ClickHouse documentation.
Client creation examples {#client-creation-examples}
Without any parameters, a ClickHouse Connect client will connect to the default HTTP port on `localhost` with the default user and no password:
```python
import clickhouse_connect
client = clickhouse_connect.get_client()
print(client.server_version)
# Output: '22.10.1.98'
``` | {"source_file": "driver-api.md"} | [
Connecting to a secure (HTTPS) external ClickHouse server
```python
import clickhouse_connect
client = clickhouse_connect.get_client(host='play.clickhouse.com', secure=True, port=443, user='play', password='clickhouse')
print(client.command('SELECT timezone()'))
# Output: 'Etc/UTC'
```
Connecting with a session ID and other custom connection parameters and ClickHouse settings.
```python
import clickhouse_connect
client = clickhouse_connect.get_client(
host='play.clickhouse.com',
user='play',
password='clickhouse',
port=443,
session_id='example_session_1',
connect_timeout=15,
database='github',
settings={'distributed_ddl_task_timeout':300},
)
print(client.database)
# Output: 'github'
```
Client Lifecycle and Best Practices {#client-lifecycle-and-best-practices}
Creating a ClickHouse Connect client is an expensive operation that involves establishing a connection, retrieving server metadata, and initializing settings. Follow these best practices for optimal performance:
Core principles {#core-principles}
- Reuse clients: Create clients once at application startup and reuse them throughout the application lifetime
- Avoid frequent creation: Don't create a new client for each query or request (this wastes hundreds of milliseconds per operation)
- Clean up properly: Always close clients when shutting down to release connection pool resources
- Share when possible: A single client can handle many concurrent queries through its connection pool (see threading notes below)
Basic patterns {#basic-patterns}
✅ Good: Reuse a single client
```python
import clickhouse_connect
# Create once at startup
client = clickhouse_connect.get_client(host='my-host', username='default', password='password')
# Reuse for all queries
for i in range(1000):
result = client.query('SELECT count() FROM users')
# Close on shutdown
client.close()
```
❌ Bad: Creating clients repeatedly
```python
# BAD: Creates 1000 clients with expensive initialization overhead
for i in range(1000):
client = clickhouse_connect.get_client(host='my-host', username='default', password='password')
result = client.query('SELECT count() FROM users')
client.close()
```
Multi-threaded applications {#multi-threaded-applications}
:::warning
Client instances are NOT thread-safe when using session IDs. By default, clients have an auto-generated session ID, and concurrent queries within the same session will raise a `ProgrammingError`.
:::
To share a client across threads safely:
```python
import clickhouse_connect
import threading
# Option 1: Disable sessions (recommended for shared clients)
client = clickhouse_connect.get_client(
host='my-host',
username='default',
password='password',
autogenerate_session_id=False # Required for thread safety
)
def worker(thread_id):
# All threads can now safely use the same client
result = client.query(f"SELECT {thread_id}")
print(f"Thread {thread_id}: {result.result_rows[0][0]}")
threads = [threading.Thread(target=worker, args=(i,)) for i in range(10)]
for t in threads:
t.start()
for t in threads:
t.join()
client.close()
# Output (thread completion order is nondeterministic):
# Thread 0: 0
# Thread 7: 7
# Thread 1: 1
# Thread 9: 9
# Thread 4: 4
# Thread 2: 2
# Thread 8: 8
# Thread 5: 5
# Thread 6: 6
# Thread 3: 3
```
Alternative for sessions:
If you need sessions (e.g., for temporary tables), create a separate client per thread:
```python
def worker(thread_id):
    # Each thread gets its own client with isolated session
    client = clickhouse_connect.get_client(host='my-host', username='default', password='password')
    client.command('CREATE TEMPORARY TABLE temp (id UInt32) ENGINE = Memory')
    # ... use temp table ...
    client.close()
```
Proper cleanup {#proper-cleanup}
Always close clients at shutdown. Note that `client.close()` disposes the client and closes pooled HTTP connections only when the client owns its pool manager (for example, when created with custom TLS/proxy options). For the default shared pool, use `client.close_connections()` to proactively clear sockets; otherwise, connections are reclaimed automatically via idle expiration and at process exit.
```python
client = clickhouse_connect.get_client(host='my-host', username='default', password='password')
try:
    result = client.query('SELECT 1')
finally:
    client.close()
```
Or use a context manager:
```python
with clickhouse_connect.get_client(host='my-host', username='default', password='password') as client:
    result = client.query('SELECT 1')
```
When to use multiple clients {#when-to-use-multiple-clients}
Multiple clients are appropriate for:
- **Different servers**: One client per ClickHouse server or cluster
- **Different credentials**: Separate clients for different users or access levels
- **Different databases**: When you need to work with multiple databases
- **Isolated sessions**: When you need separate sessions for temporary tables or session-specific settings
- **Per-thread isolation**: When threads need independent sessions (as shown above)
Common method arguments {#common-method-arguments}
Several client methods use one or both of the common `parameters` and `settings` arguments. These keyword arguments are described below.
Parameters argument {#parameters-argument}
ClickHouse Connect Client `query*` and `command` methods accept an optional `parameters` keyword argument used for binding Python expressions to a ClickHouse value expression. Two sorts of binding are available.
Server-side binding {#server-side-binding}
ClickHouse supports server-side binding for most query values, where the bound value is sent separately from the query as an HTTP query parameter. ClickHouse Connect will add the appropriate query parameters if it detects a binding expression of the form `{<name>:<datatype>}`. For server-side binding, the `parameters` argument should be a Python dictionary.
Server-side binding with Python dictionary, DateTime value, and string value
```python
import datetime
my_date = datetime.datetime(2022, 10, 1, 15, 20, 5)
parameters = {'table': 'my_table', 'v1': my_date, 'v2': "a string with a single quote'"}
client.query('SELECT * FROM {table:Identifier} WHERE date >= {v1:DateTime} AND string ILIKE {v2:String}', parameters=parameters)
```
This generates the following query on the server:
```sql
SELECT *
FROM my_table
WHERE date >= '2022-10-01 15:20:05'
AND string ILIKE 'a string with a single quote\''
```
:::warning
Server-side binding is only supported (by the ClickHouse server) for `SELECT` queries. It does not work for `ALTER`, `DELETE`, `INSERT`, or other types of queries. This may change in the future; see https://github.com/ClickHouse/ClickHouse/issues/42092.
:::
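As a rough illustration of the `{<name>:<datatype>}` form described above, a simplified pattern can pick out binding expressions in a query string. This is illustrative only and is not the driver's actual parser:

```python
import re

# Simplified, illustrative pattern for {<name>:<datatype>} binding
# expressions; not the driver's actual parser.
BIND_EXPR = re.compile(r"\{(\w+):(\w+(?:\(\d+\))?)\}")

query = "SELECT * FROM {table:Identifier} WHERE date >= {v1:DateTime}"
print(BIND_EXPR.findall(query))
# [('table', 'Identifier'), ('v1', 'DateTime')]
```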
Client-side binding {#client-side-binding}
ClickHouse Connect also supports client-side parameter binding, which can allow more flexibility in generating templated SQL queries. For client-side binding, the `parameters` argument should be a dictionary or a sequence. Client-side binding uses the Python "printf" style string formatting for parameter substitution.
Note that unlike server-side binding, client-side binding does not work for database identifiers such as database, table, or column names, since Python-style formatting cannot distinguish between the different types of strings, and they need to be formatted differently (backticks or double quotes for database identifiers, single quotes for data values).
Example with Python Dictionary, DateTime value and string escaping
```python
import datetime
my_date = datetime.datetime(2022, 10, 1, 15, 20, 5)
parameters = {'v1': my_date, 'v2': "a string with a single quote'"}
client.query('SELECT * FROM my_table WHERE date >= %(v1)s AND string ILIKE %(v2)s', parameters=parameters)
```
This generates the following query on the server:
```sql
SELECT *
FROM my_table
WHERE date >= '2022-10-01 15:20:05'
AND string ILIKE 'a string with a single quote\''
```
Example with Python Sequence (Tuple), Float64, and IPv4Address
```python
import ipaddress
parameters = (35200.44, ipaddress.IPv4Address(0x443d04fe))
client.query('SELECT * FROM some_table WHERE metric >= %s AND ip_address = %s', parameters=parameters)
```
This generates the following query on the server:
```sql
SELECT *
FROM some_table
WHERE metric >= 35200.44
AND ip_address = '68.61.4.254'
```
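The single-quote escaping shown in the generated SQL can be sketched in pure Python. This is illustrative only; `quote_value` is a hypothetical helper, not the driver's implementation:

```python
# Illustrative sketch of printf-style client-side substitution with value
# escaping; quote_value is a hypothetical helper, not the driver's code.
def quote_value(value):
    if isinstance(value, str):
        return "'" + value.replace("'", "\\'") + "'"
    return str(value)

params = {'v2': "a string with a single quote'"}
escaped = {name: quote_value(v) for name, v in params.items()}
sql = "SELECT * FROM my_table WHERE string ILIKE %(v2)s" % escaped
print(sql)
# SELECT * FROM my_table WHERE string ILIKE 'a string with a single quote\''
```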
:::note
Binding DateTime64 arguments (ClickHouse types with sub-second precision) requires one of two custom approaches:
- Wrap the Python `datetime.datetime` value in the new DT64Param class, e.g.
```python
query = 'SELECT {p1:DateTime64(3)}'  # Server-side binding with dictionary
parameters = {'p1': DT64Param(dt_value)}

query = 'SELECT %s as string, toDateTime64(%s,6) as dateTime'  # Client-side binding with list
parameters = ['a string', DT64Param(datetime.now())]
```
- If using a dictionary of parameter values, append the string `_64` to the parameter name
```python
query = 'SELECT {p1:DateTime64(3)}, {a1:Array(DateTime(3))}'  # Server-side binding with dictionary
parameters = {'p1_64': dt_value, 'a1_64': [dt_value1, dt_value2]}
```
:::
Settings argument {#settings-argument-1}
All the key ClickHouse Connect Client "insert" and "select" methods accept an optional `settings` keyword argument to pass ClickHouse server user settings for the included SQL statement. The `settings` argument should be a dictionary. Each item should be a ClickHouse setting name and its associated value. Note that values will be converted to strings when sent to the server as query parameters.

As with client-level settings, ClickHouse Connect will drop any settings that the server marks as `readonly=1`, with an associated log message. Settings that apply only to queries via the ClickHouse HTTP interface are always valid. Those settings are described under the `get_client` API.
Example of using ClickHouse settings:
```python
settings = {'merge_tree_min_rows_for_concurrent_read': 65535,
            'session_id': 'session_1234',
            'use_skip_indexes': False}
client.query("SELECT event_type, sum(timeout) FROM event_errors WHERE event_time > '2022-08-01'", settings=settings)
```
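Since each setting travels to the server as an HTTP query parameter, its value is reduced to a string. The sketch below illustrates that reduction; the bool-to-`'1'`/`'0'` mapping is an assumption for illustration, not the driver's exact serialization code:

```python
# Illustrative sketch of settings values being reduced to strings for the
# HTTP query string; the bool mapping is an assumption, not the driver's
# exact code.
def settings_to_params(settings):
    def render(value):
        if isinstance(value, bool):
            return '1' if value else '0'
        return str(value)
    return {name: render(value) for name, value in settings.items()}

print(settings_to_params({'max_block_size': 100000, 'use_skip_indexes': False}))
# {'max_block_size': '100000', 'use_skip_indexes': '0'}
```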
Client `command` Method {#client-command-method}
Use the `Client.command` method to send SQL queries to the ClickHouse server that do not normally return data or that return a single primitive or array value rather than a full dataset. This method takes the following parameters:
| Parameter     | Type             | Default    | Description |
|---------------|------------------|------------|-------------|
| cmd           | str              | *Required* | A ClickHouse SQL statement that returns a single value or a single row of values. |
| parameters    | dict or iterable | `None`     | See [parameters description](#parameters-argument). |
| data          | str or bytes     | `None`     | Optional data to include with the command as the POST body. |
| settings      | dict             | `None`     | See [settings description](#settings-argument-1). |
| use_database  | bool             | True       | Use the client database (specified when creating the client). False means the command will use the default ClickHouse server database for the connected user. |
| external_data | ExternalData     | `None`     | An `ExternalData` object containing file or binary data to use with the query. See Advanced Queries (External Data). |
Command examples {#command-examples}
DDL statements {#ddl-statements}
```python
import clickhouse_connect
client = clickhouse_connect.get_client()
# Create a table
result = client.command("CREATE TABLE test_command (col_1 String, col_2 DateTime) ENGINE MergeTree ORDER BY tuple()")
print(result)  # Returns QuerySummary with query_id

# Show table definition
result = client.command("SHOW CREATE TABLE test_command")
print(result)
# Output:
# CREATE TABLE default.test_command
# (
#     `col_1` String,
#     `col_2` DateTime
# )
# ENGINE = MergeTree
# ORDER BY tuple()
# SETTINGS index_granularity = 8192

# Drop table
client.command("DROP TABLE test_command")
client.command("DROP TABLE test_command")
```
Simple queries returning single values {#simple-queries-returning-single-values}
```python
import clickhouse_connect
client = clickhouse_connect.get_client()
# Single value result
count = client.command("SELECT count() FROM system.tables")
print(count)
# Output: 151

# Server version
version = client.command("SELECT version()")
print(version)
# Output: 25.8.2.29
```
Commands with parameters {#commands-with-parameters}
```python
import clickhouse_connect
client = clickhouse_connect.get_client()
# Using client-side parameters
db_name = "system"
result = client.command(
    "SELECT count() FROM system.tables WHERE database = %(db)s",
    parameters={"db": db_name}
)
# Using server-side parameters
result = client.command(
    "SELECT count() FROM system.tables WHERE database = {db:String}",
    parameters={"db": "system"}
)
```
Commands with settings {#commands-with-settings}
```python
import clickhouse_connect
client = clickhouse_connect.get_client()
# Execute command with specific settings
result = client.command(
    "OPTIMIZE TABLE large_table FINAL",
    settings={"optimize_throw_if_noop": 1}
)
```
Client `query` Method {#client-query-method}
The `Client.query` method is the primary way to retrieve a single "batch" dataset from the ClickHouse server. It utilizes the Native ClickHouse format over HTTP to transmit large datasets (up to approximately one million rows) efficiently. This method takes the following parameters:
| Parameter           | Type             | Default    | Description |
|---------------------|------------------|------------|-------------|
| query               | str              | *Required* | The ClickHouse SQL SELECT or DESCRIBE query. |
| parameters          | dict or iterable | `None`     | See [parameters description](#parameters-argument). |
| settings            | dict             | `None`     | See [settings description](#settings-argument-1). |
| query_formats       | dict             | `None`     | Datatype formatting specification for result values. See Advanced Usage (Read Formats) |
| column_formats      | dict             | `None`     | Datatype formatting per column. See Advanced Usage (Read Formats) |
| encoding            | str              | `None`     | Encoding used to encode ClickHouse String columns into Python strings. Python defaults to `UTF-8` if not set. |
| use_none            | bool             | True       | Use Python `None` type for ClickHouse nulls. If False, use a datatype default (such as 0) for ClickHouse nulls. Note - defaults to False for NumPy/Pandas for performance reasons. |
| column_oriented     | bool             | False      | Return the results as a sequence of columns rather than a sequence of rows. Helpful for transforming Python data to other column oriented data formats. |
| query_tz            | str              | `None`     | A timezone name from the `zoneinfo` database. This timezone will be applied to all datetime or Pandas Timestamp objects returned by the query. |
| column_tzs          | dict             | `None`     | A dictionary of column name to timezone name. Like `query_tz`, but allows specifying different timezones for different columns. |
| use_extended_dtypes | bool             | True       | Use Pandas extended dtypes (like StringArray), and pandas.NA and pandas.NaT for ClickHouse NULL values. Applies only to `query_df` and `query_df_stream` methods. |
| external_data       | ExternalData     | `None`     | An ExternalData object containing file or binary data to use with the query. See Advanced Queries (External Data) |
| context             | QueryContext     | `None`     | A reusable QueryContext object can be used to encapsulate the above method arguments. See Advanced Queries (QueryContexts) |
Query examples {#query-examples}
Basic query {#basic-query}
```python
import clickhouse_connect
client = clickhouse_connect.get_client()
# Simple SELECT query
result = client.query("SELECT name, database FROM system.tables LIMIT 3")

# Access results as rows
for row in result.result_rows:
    print(row)
# Output:
# ('CHARACTER_SETS', 'INFORMATION_SCHEMA')
# ('COLLATIONS', 'INFORMATION_SCHEMA')
# ('COLUMNS', 'INFORMATION_SCHEMA')

# Access column names and types
print(result.column_names)
# Output: ('name', 'database')
print([col_type.name for col_type in result.column_types])
# Output: ['String', 'String']
```
Accessing query results {#accessing-query-results}
```python
import clickhouse_connect
client = clickhouse_connect.get_client()
result = client.query("SELECT number, toString(number) AS str FROM system.numbers LIMIT 3")
# Row-oriented access (default)
print(result.result_rows)
# Output: [[0, "0"], [1, "1"], [2, "2"]]

# Column-oriented access
print(result.result_columns)
# Output: [[0, 1, 2], ["0", "1", "2"]]

# Named results (list of dictionaries)
for row_dict in result.named_results():
    print(row_dict)
# Output:
# {"number": 0, "str": "0"}
# {"number": 1, "str": "1"}
# {"number": 2, "str": "2"}

# First row as dictionary
print(result.first_item)
# Output: {"number": 0, "str": "0"}

# First row as tuple
print(result.first_row)
# Output: (0, "0")
```
Query with client-side parameters {#query-with-client-side-parameters}
```python
import clickhouse_connect
client = clickhouse_connect.get_client()
# Using dictionary parameters (printf-style)
query = "SELECT * FROM system.tables WHERE database = %(db)s AND name LIKE %(pattern)s"
parameters = {"db": "system", "pattern": "%query%"}
result = client.query(query, parameters=parameters)

# Using tuple parameters
query = "SELECT * FROM system.tables WHERE database = %s LIMIT %s"
parameters = ("system", 5)
result = client.query(query, parameters=parameters)
```
Query with server-side parameters {#query-with-server-side-parameters}
```python
import clickhouse_connect
client = clickhouse_connect.get_client()
# Server-side binding (more secure, better performance for SELECT queries)
query = "SELECT * FROM system.tables WHERE database = {db:String} AND name = {tbl:String}"
parameters = {"db": "system", "tbl": "query_log"}
result = client.query(query, parameters=parameters)
```
Query with settings {#query-with-settings}
```python
import clickhouse_connect
client = clickhouse_connect.get_client()
# Pass ClickHouse settings with the query
result = client.query(
    "SELECT sum(number) FROM numbers(1000000)",
    settings={
        "max_block_size": 100000,
        "max_execution_time": 30
    }
)
```
The `QueryResult` object {#the-queryresult-object}
The base `query` method returns a `QueryResult` object with the following public properties:
- `result_rows` -- A matrix of the data returned in the form of a Sequence of rows, with each row element being a sequence of column values.
- `result_columns` -- A matrix of the data returned in the form of a Sequence of columns, with each column element being a sequence of the row values for that column
- `column_names` -- A tuple of strings representing the column names in the `result_set`
- `column_types` -- A tuple of ClickHouseType instances representing the ClickHouse data type for each column in the `result_columns`
- `query_id` -- The ClickHouse query_id (useful for examining the query in the `system.query_log` table)
- `summary` -- A dictionary of summary information returned by ClickHouse (as described under the `command` method), including any data returned by the `X-ClickHouse-Summary` HTTP response header
- `first_item` -- A convenience property for retrieving the first row of the response as a dictionary (keys are column names)
- `first_row` -- A convenience property to return the first row of the result
- `column_block_stream` -- A generator of query results in column oriented format. This property should not be referenced directly (see below).
- `row_block_stream` -- A generator of query results in row oriented format. This property should not be referenced directly (see below).
- `rows_stream` -- A generator of query results that yields a single row per invocation. This property should not be referenced directly (see below).
The `*_stream` properties return a Python Context that can be used as an iterator for the returned data. They should only be accessed indirectly using the Client `*_stream` methods.

The complete details of streaming query results (using StreamContext objects) are outlined in Advanced Queries (Streaming Queries).
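The relationship between `column_names`, `first_row`, and `first_item` can be sketched with plain Python data. This is illustrative only, not the actual property implementations:

```python
# Illustrative data standing in for a QueryResult; not the actual
# property implementations.
column_names = ('number', 'str')
result_rows = [(0, '0'), (1, '1'), (2, '2')]

first_row = result_rows[0]
first_item = dict(zip(column_names, first_row))
print(first_row)   # (0, '0')
print(first_item)  # {'number': 0, 'str': '0'}
```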
Consuming query results with NumPy, Pandas or Arrow {#consuming-query-results-with-numpy-pandas-or-arrow}
ClickHouse Connect provides specialized query methods for NumPy, Pandas, and Arrow data formats. For detailed information on using these methods, including examples, streaming capabilities, and advanced type handling, see Advanced Querying (NumPy, Pandas and Arrow Queries).
Client streaming query methods {#client-streaming-query-methods}
For streaming large result sets, ClickHouse Connect provides multiple streaming methods. See Advanced Queries (Streaming Queries) for details and examples.
Client `insert` Method {#client-insert-method}
For the common use case of inserting multiple records into ClickHouse, there is the `Client.insert` method. It takes the following parameters:
| Parameter          | Type                              | Default    | Description |
|--------------------|-----------------------------------|------------|-------------|
| table              | str                               | *Required* | The ClickHouse table to insert into. The full table name (including database) is permitted. |
| data               | Sequence of Sequences             | *Required* | The matrix of data to insert, either a Sequence of rows, each of which is a sequence of column values, or a Sequence of columns, each of which is a sequence of row values. |
| column_names       | Sequence of str, or str           | `'*'`      | A list of column names for the data matrix. If `'*'` is used instead, ClickHouse Connect will execute a "pre-query" to retrieve all of the column names for the table. |
| database           | str                               | `''`       | The target database of the insert. If not specified, the database for the client will be assumed. |
| column_types       | Sequence of ClickHouseType        | `None`     | A list of ClickHouseType instances. If neither column_types nor column_type_names is specified, ClickHouse Connect will execute a "pre-query" to retrieve all the column types for the table. |
| column_type_names  | Sequence of ClickHouse type names | `None`     | A list of ClickHouse datatype names. If neither column_types nor column_type_names is specified, ClickHouse Connect will execute a "pre-query" to retrieve all the column types for the table. |
| column_oriented    | bool                              | False      | If True, the `data` argument is assumed to be a Sequence of columns (and no "pivot" will be necessary to insert the data). Otherwise `data` is interpreted as a Sequence of rows. |
| settings           | dict                              | `None`     | See [settings description](#settings-argument-1). |
| context            | InsertContext                     | `None`     | A reusable InsertContext object can be used to encapsulate the above method arguments. See Advanced Inserts (InsertContexts). |
| transport_settings | dict                              | `None`     | Optional dictionary of transport-level settings (HTTP headers, etc.) |
This method returns a "query summary" dictionary as described under the "command" method. An exception will be raised if the insert fails for any reason.
For specialized insert methods that work with Pandas DataFrames, PyArrow Tables, and Arrow-backed DataFrames, see Advanced Inserting (Specialized Insert Methods).
:::note
A NumPy array is a valid Sequence of Sequences and can be used as the `data` argument to the main `insert` method, so a specialized method is not required.
:::
Examples {#examples}
The examples below assume an existing table `users` with schema `(id UInt32, name String, age UInt8)`.
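A table matching the assumed schema could be created with DDL along these lines (illustrative; the engine and ordering key here are assumptions, not part of the examples):

```sql
CREATE TABLE users
(
    id   UInt32,
    name String,
    age  UInt8
)
ENGINE = MergeTree
ORDER BY id
```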
Basic row-oriented insert {#basic-row-oriented-insert}
```python
import clickhouse_connect
client = clickhouse_connect.get_client()
# Row-oriented data: each inner list is a row
data = [
    [1, "Alice", 25],
    [2, "Bob", 30],
    [3, "Joe", 28],
]

client.insert("users", data, column_names=["id", "name", "age"])
```
Column-oriented insert {#column-oriented-insert}
```python
import clickhouse_connect
client = clickhouse_connect.get_client()
# Column-oriented data: each inner list is a column
data = [
    [1, 2, 3],                # id column
    ["Alice", "Bob", "Joe"],  # name column
    [25, 30, 28],             # age column
]

client.insert("users", data, column_names=["id", "name", "age"], column_oriented=True)
```
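The row-to-column "pivot" that `column_oriented=True` lets the driver skip can be sketched with `zip` (illustrative only, not the driver's actual insert path):

```python
# Illustrative sketch of pivoting a Sequence of rows into a Sequence of
# columns; not the driver's actual insert path.
rows = [
    [1, "Alice", 25],
    [2, "Bob", 30],
    [3, "Joe", 28],
]

columns = [list(col) for col in zip(*rows)]
print(columns)
# [[1, 2, 3], ['Alice', 'Bob', 'Joe'], [25, 30, 28]]
```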
Insert with explicit column types {#insert-with-explicit-column-types}
```python
import clickhouse_connect
client = clickhouse_connect.get_client()
# Useful when you want to avoid a DESCRIBE query to the server
data = [
    [1, "Alice", 25],
    [2, "Bob", 30],
    [3, "Joe", 28],
]

client.insert(
    "users",
    data,
    column_names=["id", "name", "age"],
    column_type_names=["UInt32", "String", "UInt8"],
)
```
Insert into specific database {#insert-into-specific-database}
```python
import clickhouse_connect
client = clickhouse_connect.get_client()
data = [
    [1, "Alice", 25],
    [2, "Bob", 30],
]

# Insert into a table in a specific database
client.insert(
    "users",
    data,
    column_names=["id", "name", "age"],
    database="production",
)
```
File Inserts {#file-inserts}
For inserting data directly from files into ClickHouse tables, see Advanced Inserting (File Inserts).
Raw API {#raw-api}
For advanced use cases requiring direct access to ClickHouse HTTP interfaces without type transformations, see Advanced Usage (Raw API).
Utility classes and functions {#utility-classes-and-functions}
The following classes and functions are also considered part of the "public" `clickhouse-connect` API and are, like the classes and methods documented above, stable across minor releases. Breaking changes to these classes and functions will only occur with a minor (not patch) release and will be available with a deprecated status for at least one minor release.
Exceptions {#exceptions}
All custom exceptions (including those defined in the DB API 2.0 specification) are defined in the `clickhouse_connect.driver.exceptions` module. Exceptions actually detected by the driver will use one of these types.
ClickHouse SQL utilities {#clickhouse-sql-utilities}
The functions and the DT64Param class in the `clickhouse_connect.driver.binding` module can be used to properly build and escape ClickHouse SQL queries. Similarly, the functions in the `clickhouse_connect.driver.parser` module can be used to parse ClickHouse datatype names.
Multithreaded, multiprocess, and async/event driven use cases {#multithreaded-multiprocess-and-asyncevent-driven-use-cases}
For information on using ClickHouse Connect in multithreaded, multiprocess, and async/event-driven applications, see Advanced Usage (Multithreaded, multiprocess, and async/event driven use cases).
AsyncClient wrapper {#asyncclient-wrapper}
For information on using the AsyncClient wrapper for asyncio environments, see Advanced Usage (AsyncClient wrapper).
Managing ClickHouse Session IDs {#managing-clickhouse-session-ids}
For information on managing ClickHouse session IDs in multi-threaded or concurrent applications, see Advanced Usage (Managing ClickHouse Session IDs).
Customizing the HTTP connection pool {#customizing-the-http-connection-pool}
For information on customizing the HTTP connection pool for large multi-threaded applications, see Advanced Usage (Customizing the HTTP connection pool).
---
keywords: ['clickhouse', 'python', 'client', 'connect', 'integrate']
slug: /integrations/python
description: 'The ClickHouse Connect project suite for connecting Python to ClickHouse'
title: 'Python Integration with ClickHouse Connect'
doc_type: 'guide'
integration:
- support_level: 'core'
- category: 'language_client'
- website: 'https://github.com/ClickHouse/clickhouse-connect'
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import CodeBlock from '@theme/CodeBlock';
import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx';
Introduction {#introduction}
ClickHouse Connect is a core database driver providing interoperability with a wide range of Python applications.
The main interface is the `Client` object in the package `clickhouse_connect.driver`. That core package also includes assorted helper classes and utility functions used for communicating with the ClickHouse server and "context" implementations for advanced management of insert and select queries.

The `clickhouse_connect.datatypes` package provides a base implementation and subclasses for all non-experimental ClickHouse datatypes. Its primary functionality is serialization and deserialization of ClickHouse data into the ClickHouse "Native" binary columnar format, used to achieve the most efficient transport between ClickHouse and client applications.

The Cython/C classes in the `clickhouse_connect.cdriver` package optimize some of the most common serializations and deserializations for significantly improved performance over pure Python.
There is a SQLAlchemy dialect in the package `clickhouse_connect.cc_sqlalchemy` which is built off of the `datatypes` and `dbi` packages. This implementation supports SQLAlchemy Core functionality including `SELECT` queries with `JOIN`s (`INNER`, `LEFT OUTER`, `FULL OUTER`, `CROSS`), `WHERE` clauses, `ORDER BY`, `LIMIT`/`OFFSET`, `DISTINCT` operations, lightweight `DELETE` statements with `WHERE` conditions, table reflection, and basic DDL operations (`CREATE TABLE`, `CREATE`/`DROP DATABASE`). While it does not support advanced ORM features or advanced DDL features, it provides robust query capabilities suitable for most analytical workloads against ClickHouse's OLAP-oriented database.
The core driver and ClickHouse Connect SQLAlchemy implementation are the preferred method for connecting ClickHouse to Apache Superset. Use the ClickHouse Connect database connection, or the `clickhousedb` SQLAlchemy dialect connection string.
This documentation is current as of the clickhouse-connect release 0.9.2.
:::note
The official ClickHouse Connect Python driver uses the HTTP protocol for communication with the ClickHouse server. This enables HTTP load balancer support and works well in enterprise environments with firewalls and proxies, but has slightly lower compression and performance compared to the native TCP-based protocol, and lacks support for some advanced features like query cancellation. For some use cases, you may consider using one of the Community Python drivers that use the native TCP-based protocol.
:::
Requirements and compatibility {#requirements-and-compatibility}
| Python | | Platform¹ | | ClickHouse | | SQLAlchemy² | | Apache Superset | | Pandas | | Polars | |
|-------------:|:--|----------------:|:--|----------------:|:---|------------:|:--|----------------:|:--|--------:|:--|-------:|:--|
| 2.x, <3.9 | ❌ | Linux (x86) | ✅ | <25.x³ | 🟡 | <1.4.40 | ❌ | <1.4 | ❌ | ≥1.5 | ✅ | 1.x | ✅ |
| 3.9.x | ✅ | Linux (Aarch64) | ✅ | 25.x³ | 🟡 | ≥1.4.40 | ✅ | 1.4.x | ✅ | 2.x | ✅ | | |
| 3.10.x | ✅ | macOS (x86) | ✅ | 25.3.x (LTS) | ✅ | ≥2.x | ✅ | 1.5.x | ✅ | | | | |
| 3.11.x | ✅ | macOS (ARM) | ✅ | 25.6.x (Stable) | ✅ | | | 2.0.x | ✅ | | | | |
| 3.12.x | ✅ | Windows | ✅ | 25.7.x (Stable) | ✅ | | | 2.1.x | ✅ | | | | |
| 3.13.x | ✅ | | | 25.8.x (LTS) | ✅ | | | 3.0.x | ✅ | | | | |
| | | | | 25.9.x (Stable) | ✅ | | | | | | | | |
¹ClickHouse Connect has been explicitly tested against the listed platforms. In addition, untested binary wheels (with C optimization) are built for all architectures supported by the excellent cibuildwheel project. Finally, because ClickHouse Connect can also run as pure Python, the source installation should work on any recent Python installation.
²SQLAlchemy support is limited to Core functionality (queries, basic DDL). ORM features are not supported. See the SQLAlchemy Integration Support docs for details.
³ClickHouse Connect generally works well with versions outside the officially supported range.
Installation {#installation}
Install ClickHouse Connect from PyPI via pip:

```bash
pip install clickhouse-connect
```

ClickHouse Connect can also be installed from source:
* `git clone` the GitHub repository.
* (Optional) run `pip install cython` to build and enable the C/Cython optimizations.
* `cd` to the project root directory and run `pip install .`
Support policy {#support-policy}
Please update to the latest version of ClickHouse Connect before reporting any issues. Issues should be filed in the GitHub project. Future releases of ClickHouse Connect are intended to be compatible with actively supported ClickHouse versions at the time of release. Actively supported versions of ClickHouse server can be found here. If you're unsure what version of ClickHouse server to use, read this discussion. Our CI test matrix tests against the latest two LTS releases and the latest three stable releases. However, due to the HTTP protocol and minimal breaking changes between ClickHouse releases, ClickHouse Connect generally works well with server versions outside the officially supported range, though compatibility with certain advanced data types may vary.
Basic usage {#basic-usage}
Gather your connection details {#gather-your-connection-details}
Establish a connection {#establish-a-connection}
There are two examples shown for connecting to ClickHouse:
- Connecting to a ClickHouse server on localhost.
- Connecting to a ClickHouse Cloud service.
Use a ClickHouse Connect client instance to connect to a ClickHouse server on localhost: {#use-a-clickhouse-connect-client-instance-to-connect-to-a-clickhouse-server-on-localhost}
```python
import clickhouse_connect
client = clickhouse_connect.get_client(host='localhost', username='default', password='password')
```
Use a ClickHouse Connect client instance to connect to a ClickHouse Cloud service: {#use-a-clickhouse-connect-client-instance-to-connect-to-a-clickhouse-cloud-service}
:::tip
Use the connection details gathered earlier. ClickHouse Cloud services require TLS, so use port 8443.
:::
```python
import clickhouse_connect
client = clickhouse_connect.get_client(host='HOSTNAME.clickhouse.cloud', port=8443, username='default', password='your password')
```
Interact with your database {#interact-with-your-database}
To run a ClickHouse SQL command, use the client `command` method:

```python
client.command('CREATE TABLE new_table (key UInt32, value String, metric Float64) ENGINE MergeTree ORDER BY key')
```

To insert batch data, use the client `insert` method with a two-dimensional array of rows and values:

```python
row1 = [1000, 'String Value 1000', 5.233]
row2 = [2000, 'String Value 2000', -107.04]
data = [row1, row2]
client.insert('new_table', data, column_names=['key', 'value', 'metric'])
```
To retrieve data using ClickHouse SQL, use the client `query` method:

```python
result = client.query('SELECT max(key), avg(metric) FROM new_table')
print(result.result_rows)
# Output: [(2000, -50.9035)]
```

---
sidebar_label: 'Advanced Usage'
sidebar_position: 6
keywords: ['clickhouse', 'python', 'advanced', 'raw', 'async', 'threading']
description: 'Advanced Usage with ClickHouse Connect'
slug: /integrations/language-clients/python/advanced-usage
title: 'Advanced Usage'
doc_type: 'reference'
---
Advanced Usage {#advanced-usage}
Raw API {#raw-api}
For use cases which do not require transformation between ClickHouse data and native or third party data types and structures, the ClickHouse Connect client provides methods for direct usage of the ClickHouse connection.
Client `raw_query` method {#client-rawquery-method}
The `Client.raw_query` method allows direct usage of the ClickHouse HTTP query interface using the client connection. The return value is an unprocessed `bytes` object. It offers a convenient wrapper with parameter binding, error handling, retries, and settings management using a minimal interface:
| Parameter     | Type             | Default    | Description |
|---------------|------------------|------------|-------------|
| query         | str              | *Required* | Any valid ClickHouse query |
| parameters    | dict or iterable | None       | See parameters description. |
| settings      | dict             | None       | See settings description. |
| fmt           | str              | None       | ClickHouse Output Format for the resulting bytes. (ClickHouse uses TSV if not specified) |
| use_database  | bool             | True       | Use the ClickHouse Connect client-assigned database for the query context |
| external_data | ExternalData     | None       | An ExternalData object containing file or binary data to use with the query. See Advanced Queries (External Data) |
It is the caller's responsibility to handle the resulting `bytes` object. Note that `Client.query_arrow` is just a thin wrapper around this method using the ClickHouse `Arrow` output format.
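A minimal sketch of this pattern (the query, host, and credentials below are placeholder assumptions, and executing it requires a reachable ClickHouse server): `raw_query` binds parameters server-side and requests a specific output format, leaving the byte handling to the caller:

```python
def fetch_raw_json(client, limit):
    # Bind the LIMIT value server-side and request JSONEachRow output;
    # the return value is an unprocessed bytes object the caller must decode.
    return client.raw_query(
        'SELECT number FROM system.numbers LIMIT {limit:UInt32}',
        parameters={'limit': limit},
        fmt='JSONEachRow',
    )

# Usage (requires a running ClickHouse server):
# import clickhouse_connect
# client = clickhouse_connect.get_client(host='localhost', username='default', password='password')
# for line in fetch_raw_json(client, 3).decode().splitlines():
#     print(line)  # one JSON object per line
```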
Client `raw_stream` method {#client-rawstream-method}
The `Client.raw_stream` method has the same API as the `raw_query` method, but returns an `io.IOBase` object which can be used as a generator/stream source of `bytes` objects. It is currently utilized by the `query_arrow_stream`
method.
Client `raw_insert` method {#client-rawinsert-method}
The `Client.raw_insert` method allows direct inserts of `bytes` objects or `bytes` object generators using the client connection. Because it does no processing of the insert payload, it is highly performant. The method provides options to specify settings and insert format:
| Parameter    | Type                                   | Default    | Description |
|--------------|----------------------------------------|------------|-------------|
| table        | str                                    | *Required* | Either the simple or database qualified table name |
| column_names | Sequence[str]                          | None       | Column names for the insert block. Required if the fmt parameter does not include names |
| insert_block | str, bytes, Generator[bytes], BinaryIO | *Required* | Data to insert. Strings will be encoded with the client encoding. |
| settings     | dict                                   | None       | See settings description. |
| fmt          | str                                    | None       | ClickHouse Input Format of the insert_block bytes. (ClickHouse uses TSV if not specified) |
It is the caller's responsibility to ensure that the `insert_block` is in the specified format and uses the specified compression method. ClickHouse Connect uses these raw inserts for file uploads and PyArrow Tables, delegating parsing to the ClickHouse server.
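For example (a sketch: the table name and the naive CSV rendering are illustrative assumptions, and the insert itself requires a running server), pre-formatted CSV bytes can be passed straight through for server-side parsing:

```python
def build_csv_block(rows):
    # Naive CSV rendering for illustration (no quoting/escaping);
    # ClickHouse parses the bytes server-side when fmt='CSV'.
    lines = [','.join(str(v) for v in row) for row in rows]
    return ('\n'.join(lines) + '\n').encode()

def raw_csv_insert(client, table, rows, column_names):
    # The caller guarantees the block matches the declared format.
    client.raw_insert(
        table,
        column_names=column_names,
        insert_block=build_csv_block(rows),
        fmt='CSV',
    )

# Usage (requires a running ClickHouse server):
# import clickhouse_connect
# client = clickhouse_connect.get_client(host='localhost', username='default', password='password')
# raw_csv_insert(client, 'new_table', [[1, 'a', 1.5], [2, 'b', 2.5]],
#                column_names=['key', 'value', 'metric'])
```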
Saving query results as files {#saving-query-results-as-files}
You can stream files directly from ClickHouse to the local file system using the `raw_stream` method. For example, if you'd like to save the results of a query to a CSV file, you could use the following code snippet:

```python
import clickhouse_connect

if __name__ == '__main__':
    client = clickhouse_connect.get_client()
    query = 'SELECT number, toString(number) AS number_as_str FROM system.numbers LIMIT 5'
    fmt = 'CSVWithNames'  # or CSV, or CSVWithNamesAndTypes, or TabSeparated, etc.
    stream = client.raw_stream(query=query, fmt=fmt)
    with open("output.csv", "wb") as f:
        for chunk in stream:
            f.write(chunk)
```
The code above yields an `output.csv` file with the following content:

```csv
"number","number_as_str"
0,"0"
1,"1"
2,"2"
3,"3"
4,"4"
```
Similarly, you could save data in `TabSeparated` and other formats. See Formats for Input and Output Data for an overview of all available format options.
Multithreaded, multiprocess, and async/event driven use cases {#multithreaded-multiprocess-and-asyncevent-driven-use-cases}
ClickHouse Connect works well in multithreaded, multiprocess, and event-loop-driven/asynchronous applications. All query and insert processing occurs within a single thread, so operations are generally thread-safe. (Parallel processing of some operations at a low level is a possible future enhancement to overcome the performance penalty of a single thread, but even in that case thread safety will be maintained.)
Because each query or insert executed maintains state in its own `QueryContext` or `InsertContext` object, respectively, these helper objects are not thread-safe, and they should not be shared between multiple processing streams. See the additional discussion about context objects in the QueryContexts and InsertContexts sections.
Additionally, in an application that has two or more queries and/or inserts "in flight" at the same time, there are two further considerations to keep in mind. The first is the ClickHouse "session" associated with the query/insert, and the second is the HTTP connection pool used by ClickHouse Connect Client instances.
AsyncClient wrapper {#asyncclient-wrapper}
ClickHouse Connect provides an async wrapper over the regular `Client`, so that it is possible to use the client in an `asyncio` environment.
To get an instance of the `AsyncClient`, you can use the `get_async_client` factory function, which accepts the same parameters as the standard `get_client`:
```python
import asyncio
import clickhouse_connect
async def main():
client = await clickhouse_connect.get_async_client()
result = await client.query("SELECT name FROM system.databases LIMIT 1")
print(result.result_rows)
# Output:
# [('INFORMATION_SCHEMA',)]
asyncio.run(main())
```
`AsyncClient` has the same methods with the same parameters as the standard `Client`, but they are coroutines when applicable. Internally, the `Client` methods that perform I/O operations are wrapped in a `run_in_executor` call.
Multithreaded performance will increase when using the `AsyncClient` wrapper, as the execution threads and the GIL will be released while waiting for I/O operations to complete.
Note: Unlike the regular `Client`, the `AsyncClient` enforces `autogenerate_session_id` to be `False` by default.
See also: the run_async example.
Managing ClickHouse session IDs {#managing-clickhouse-session-ids}
Each ClickHouse query occurs within the context of a ClickHouse "session". Sessions are currently used for two purposes:
- To associate specific ClickHouse settings with multiple queries (see the user settings). The ClickHouse `SET` command is used to change the settings for the scope of a user session.
- To track temporary tables.
By default, each query executed with a ClickHouse Connect
`Client` instance uses that client's session ID. `SET` statements and temporary tables work as expected when using a single client. However, the ClickHouse server does not allow concurrent queries within the same session (the client will raise a `ProgrammingError` if attempted). For applications that execute concurrent queries, use one of the following patterns:
1. Create a separate `Client` instance for each thread/process/event handler that needs session isolation. This preserves per-client session state (temporary tables and `SET` values).
2. Use a unique `session_id` for each query via the `settings` argument when calling `query`, `command`, or `insert`, if you do not require shared session state.
3. Disable sessions on a shared client by setting `autogenerate_session_id=False` before creating the client (or pass it directly to `get_client`).
```python
from clickhouse_connect import common
import clickhouse_connect
common.set_setting('autogenerate_session_id', False) # This should always be set before creating a client
client = clickhouse_connect.get_client(host='somehost.com', user='dbuser', password='1234')
```
Alternatively, pass `autogenerate_session_id=False` directly to `get_client(...)`.
In this case ClickHouse Connect does not send a `session_id`; the server does not treat separate requests as belonging to the same session. Temporary tables and session-level settings will not persist across requests.
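Pattern 2 above can be sketched as follows (the helper names are illustrative, and executing the query requires a running server): a fresh UUID is passed as the `session_id` setting for each query, so concurrent queries never collide in one session:

```python
import uuid

def fresh_session_settings():
    # A new UUID per call; passing it as the session_id setting gives
    # each query its own ClickHouse session.
    return {'session_id': str(uuid.uuid4())}

def query_with_fresh_session(client, sql):
    # Safe to call from concurrent workers sharing one client, since no
    # two calls ever reuse a session.
    return client.query(sql, settings=fresh_session_settings())

# Usage (requires a running ClickHouse server):
# import clickhouse_connect
# client = clickhouse_connect.get_client(host='localhost')
# result = query_with_fresh_session(client, 'SELECT 1')
```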
Customizing the HTTP connection pool {#customizing-the-http-connection-pool}
ClickHouse Connect uses `urllib3` connection pools to handle the underlying HTTP connection to the server. By default, all client instances share the same connection pool, which is sufficient for the majority of use cases. This default pool maintains up to 8 HTTP Keep Alive connections to each ClickHouse server used by the application.
For large multi-threaded applications, separate connection pools may be appropriate. Customized connection pools can be provided as the `pool_mgr` keyword argument to the main `clickhouse_connect.get_client` function:
```python
import clickhouse_connect
from clickhouse_connect.driver import httputil
big_pool_mgr = httputil.get_pool_manager(maxsize=16, num_pools=12)
client1 = clickhouse_connect.get_client(pool_mgr=big_pool_mgr)
client2 = clickhouse_connect.get_client(pool_mgr=big_pool_mgr)
```
As demonstrated by the above example, clients can share a pool manager, or a separate pool manager can be created for each client. For more details on the options available when creating a `PoolManager`, see the urllib3 documentation.

---
sidebar_label: 'Additional Options'
sidebar_position: 3
keywords: ['clickhouse', 'python', 'options', 'settings']
description: 'Additional Options for ClickHouse Connect'
slug: /integrations/language-clients/python/additional-options
title: 'Additional Options'
doc_type: 'reference'
---
Additional options {#additional-options}
ClickHouse Connect provides a number of additional options for advanced use cases.
Global settings {#global-settings}
There are a small number of settings that control ClickHouse Connect behavior globally. They are accessed from the top-level `common` package:
```python
from clickhouse_connect import common
common.set_setting('autogenerate_session_id', False)
common.get_setting('invalid_setting_action')
# Output: 'drop'
```
:::note
The common settings `autogenerate_session_id`, `product_name`, and `readonly` should *always* be modified before creating a client with the `clickhouse_connect.get_client` method. Changing these settings after client creation does not affect the behavior of existing clients.
:::
The following global settings are currently defined:

| Setting Name | Default | Options | Description |
|-------------------------------------|---------|-------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| autogenerate_session_id | True | True, False | Autogenerate a new UUID(1) session ID (if not provided) for each client session. If no session ID is provided (either at the client or query level), ClickHouse will generate a random internal ID for each query. |
| dict_parameter_format | 'json' | 'json', 'map' | This controls whether parameterized queries convert a Python dictionary to JSON or ClickHouse Map syntax. `json` should be used for inserts into JSON columns, `map` for ClickHouse Map columns. |
| invalid_setting_action | 'error' | 'drop', 'send', 'error' | Action to take when an invalid or readonly setting is provided (either for the client session or query). If `drop`, the setting will be ignored; if `send`, the setting will be sent to ClickHouse; if `error`, a client-side ProgrammingError will be raised. |
| max_connection_age | 600 | | Maximum seconds that an HTTP Keep Alive connection will be kept open/reused. This prevents bunching of connections against a single ClickHouse node behind a load balancer/proxy. Defaults to 10 minutes. |
| product_name | | | A string that is passed with the query to ClickHouse for tracking the app using ClickHouse Connect. Should be in the form &lt;product name&gt;/&lt;product version&gt;. |
| readonly | 0 | 0, 1 | Implied "read_only" ClickHouse settings for versions prior to 19.17. Can be set to match the ClickHouse "read_only" value for settings to allow operation with very old ClickHouse versions. |
| send_os_user | True | True, False | Include the detected operating system user in client information sent to ClickHouse (HTTP User-Agent string). |
| send_integration_tags | True | True, False | Include the used integration libraries/version (e.g. Pandas/SQLAlchemy/etc.) in client information sent to ClickHouse (HTTP User-Agent string). |
| use_protocol_version | True | True, False | Use the client protocol version. This is needed for `DateTime` timezone columns but breaks with the current version of chproxy. |
| max_error_size | 1024 | | Maximum number of characters that will be returned in a client error message. Use 0 for this setting to get the full ClickHouse error message. Defaults to 1024 characters. |
| http_buffer_size | 10MB | | Size (in bytes) of the "in-memory" buffer used for HTTP streaming queries. |
| preserve_pandas_datetime_resolution | False | True, False | When True and using pandas 2.x, preserves the datetime64/timedelta64 dtype resolution (e.g., 's', 'ms', 'us', 'ns'). If False (or on pandas <2.x), coerces to nanosecond ('ns') resolution for compatibility. |

Compression {#compression}
ClickHouse Connect supports lz4, zstd, brotli, and gzip compression for both query results and inserts. Always keep in mind that using compression usually involves a tradeoff between network bandwidth/transfer speed against CPU usage (both on the client and the server.)
To receive compressed data, the ClickHouse server setting `enable_http_compression` must be set to 1, or the user must have permission to change the setting on a "per query" basis.
Compression is controlled by the `compress` parameter when calling the `clickhouse_connect.get_client` factory method. By default, `compress` is set to `True`, which triggers the default compression settings. For queries executed with the `query` client method (and indirectly, `query_np` and `query_df`), ClickHouse Connect will add the `Accept-Encoding` header with the `lz4`, `zstd`, `br` (brotli, if the brotli library is installed), `gzip`, and `deflate` encodings. (For the majority of requests, the ClickHouse server will return a `zstd` compressed payload.) For inserts, by default ClickHouse Connect will compress insert blocks with `lz4` compression, and send the `Content-Encoding: lz4` HTTP header.
The `get_client` `compress` parameter can also be set to a specific compression method, one of `lz4`, `zstd`, `br`, or `gzip`. That method will then be used for both inserts and query results (if supported by the ClickHouse server). The required `zstd` and `lz4` compression libraries are now installed by default with ClickHouse Connect. If `br`/brotli is specified, the brotli library must be installed separately.
Note that the `raw*` client methods don't use the compression specified by the client configuration.
We also recommend against using `gzip` compression, as it is significantly slower than the alternatives for both compressing and decompressing data.
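As a small sketch (the helper below is illustrative; it only builds keyword arguments, and connecting requires a reachable server), a specific method can be pinned when the client is created:

```python
def zstd_client_kwargs(host, username, password):
    # compress='zstd' pins zstd for both insert payloads and query results,
    # assuming the server supports it; lz4/zstd libraries ship with
    # ClickHouse Connect by default.
    return {'host': host, 'username': username,
            'password': password, 'compress': 'zstd'}

# Usage (requires a reachable ClickHouse server):
# import clickhouse_connect
# client = clickhouse_connect.get_client(**zstd_client_kwargs('localhost', 'default', 'password'))
```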
HTTP proxy support {#http-proxy-support}
ClickHouse Connect adds basic HTTP proxy support using the `urllib3` library. It recognizes the standard `HTTP_PROXY` and `HTTPS_PROXY` environment variables. Note that using these environment variables will apply to any client created with the `clickhouse_connect.get_client` method. Alternatively, to configure per client, you can use the `http_proxy` or `https_proxy` arguments to the `get_client` method. For details on the implementation of HTTP proxy support, see the `urllib3` documentation.
To use a SOCKS proxy, you can send a `urllib3` `SOCKSProxyManager` as the `pool_mgr` argument to `get_client`. Note that this will require installing the PySocks library either directly or using the `[socks]` option for the `urllib3` dependency.
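Both proxy options can be sketched as follows (the proxy URLs and host name are placeholder assumptions; connecting requires a reachable server and proxy):

```python
def http_proxy_kwargs():
    # Per-client HTTPS proxy; overrides HTTP_PROXY/HTTPS_PROXY for this
    # client only. URL and host are placeholders.
    return {'host': 'clickhouse.internal',
            'https_proxy': 'http://proxy.example.com:3128'}

def socks_pool_manager(proxy_url='socks5h://proxy.example.com:1080'):
    # SOCKS proxying goes through a urllib3 SOCKSProxyManager passed as
    # pool_mgr; requires PySocks (e.g. pip install 'urllib3[socks]').
    from urllib3.contrib.socks import SOCKSProxyManager
    return SOCKSProxyManager(proxy_url)

# Usage (requires a reachable server and proxy):
# import clickhouse_connect
# client = clickhouse_connect.get_client(**http_proxy_kwargs())
# socks_client = clickhouse_connect.get_client(host='clickhouse.internal',
#                                              pool_mgr=socks_pool_manager())
```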
"Old" JSON data type {#old-json-data-type} | {"source_file": "additional-options.md"} | [
-0.08346162736415863,
0.02762824110686779,
-0.111094631254673,
0.04446728527545929,
-0.03362144157290459,
-0.07151705771684647,
-0.0460968092083931,
-0.06096932291984558,
-0.041610293090343475,
-0.013384299352765083,
-0.02414310723543167,
0.06569674611091614,
0.05655756965279579,
-0.018040... |
dbe28d6b-79d1-4138-943e-9b2288f49651 | "Old" JSON data type {#old-json-data-type}
The experimental `Object` (or `Object('json')`) data type is deprecated and should be avoided in a production environment. ClickHouse Connect continues to provide limited support for the data type for backward compatibility. Note that this support does not include queries that are expected to return "top level" or "parent" JSON values as dictionaries or the equivalent, and such queries will result in an exception.
"New" Variant/Dynamic/JSON datatypes (experimental feature) {#new-variantdynamicjson-datatypes-experimental-feature}
Beginning with the 0.8.0 release, `clickhouse-connect` provides experimental support for the new (also experimental) ClickHouse types Variant, Dynamic, and JSON.
Usage notes {#usage-notes}
JSON data can be inserted as either a Python dictionary or a JSON string containing a JSON object `{}`. Other forms of JSON data are not supported.
Queries using subcolumns/paths for these types will return the type of the sub column.
See the main ClickHouse documentation for other usage notes.
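A sketch of the two accepted input forms (the `json_events` table and column names are hypothetical, the JSON type must be enabled server-side, and running the insert requires a live server):

```python
import json

def insert_json_rows(client):
    # Both a Python dict and a string containing a JSON object are accepted
    # for a JSON column; other JSON forms (arrays, bare scalars) are not.
    rows = [
        [{'user': {'id': 42, 'name': 'alice'}, 'tags': ['a', 'b']}],
        [json.dumps({'user': {'id': 43, 'name': 'bob'}})],
    ]
    client.insert('json_events', rows, column_names=['payload'])

# Usage (requires a running server with the JSON type enabled):
# import clickhouse_connect
# client = clickhouse_connect.get_client(host='localhost')
# client.command('CREATE TABLE json_events (payload JSON) ENGINE MergeTree ORDER BY tuple()')
# insert_json_rows(client)
```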
Known limitations {#known-limitations}
- Each of these types must be enabled in the ClickHouse settings before using.
- The "new" JSON type is available starting with the ClickHouse 24.8 release.
- Due to internal format changes, `clickhouse-connect` is only compatible with Variant types beginning with the ClickHouse 24.7 release.
- Returned JSON objects will only return the `max_dynamic_paths` number of elements (which defaults to 1024). This will be fixed in a future release.
- Inserts into `Dynamic` columns will always be the String representation of the Python value. This will be fixed in a future release, once https://github.com/ClickHouse/ClickHouse/issues/70395 has been fixed.
- The implementation for the new types has not been optimized in C code, so performance may be somewhat slower than for simpler, established data types.

---
sidebar_label: 'Advanced Querying'
sidebar_position: 4
keywords: ['clickhouse', 'python', 'query', 'advanced']
description: 'Advanced Querying with ClickHouse Connect'
slug: /integrations/language-clients/python/advanced-querying
title: 'Advanced Querying'
doc_type: 'reference'
---
Querying data with ClickHouse Connect: Advanced usage {#querying-data-with-clickhouse-connect--advanced-usage}
QueryContexts {#querycontexts}
ClickHouse Connect executes standard queries within a `QueryContext`. The `QueryContext` contains the key structures that are used to build queries against the ClickHouse database, and the configuration used to process the result into a `QueryResult` or other response data structure. That includes the query itself, parameters, settings, read formats, and other properties.
A `QueryContext` can be acquired using the client `create_query_context` method. This method takes the same parameters as the core `query` method. This query context can then be passed to the `query`, `query_df`, or `query_np` methods as the `context` keyword argument instead of any or all of the other arguments to those methods. Note that additional arguments specified for the method call will override any properties of the `QueryContext`.
The clearest use case for a `QueryContext` is to send the same query with different binding parameter values. All parameter values can be updated by calling the `QueryContext.set_parameters` method with a dictionary, or any single value can be updated by calling `QueryContext.set_parameter` with the desired `key`, `value` pair.

```python
qc = client.create_query_context(query='SELECT value1, value2 FROM data_table WHERE key = {k:Int32}',
                                 parameters={'k': 2},
                                 column_oriented=True)
result = client.query(context=qc)
assert result.result_set[1][0] == 'second_value2'
qc.set_parameter('k', 1)
result = client.query(context=qc)
assert result.result_set[1][0] == 'first_value2'
```
Note that `QueryContext`s are not thread safe, but a copy can be obtained in a multi-threaded environment by calling the `QueryContext.updated_copy` method.
Streaming queries {#streaming-queries}
The ClickHouse Connect Client provides multiple methods for retrieving data as a stream (implemented as a Python generator):
- `query_column_block_stream` -- Returns query data in blocks as a sequence of columns using native Python objects
- `query_row_block_stream` -- Returns query data as a block of rows using native Python objects
- `query_rows_stream` -- Returns query data as a sequence of rows using native Python objects
- `query_np_stream` -- Returns each ClickHouse block of query data as a NumPy array
- `query_df_stream` -- Returns each ClickHouse block of query data as a Pandas DataFrame
- `query_arrow_stream` -- Returns query data in PyArrow RecordBlocks
- `query_df_arrow_stream` -- Returns each ClickHouse block of query data as an arrow-backed Pandas DataFrame or a Polars DataFrame depending on the kwarg `dataframe_library` (default is "pandas").
Each of these methods returns a `ContextStream` object that must be opened via a `with` statement to start consuming the stream.
Data blocks {#data-blocks}
ClickHouse Connect processes all data from the primary `query` method as a stream of blocks received from the ClickHouse server. These blocks are transmitted in the custom "Native" format to and from ClickHouse. A "block" is simply a sequence of columns of binary data, where each column contains an equal number of data values of the specified data type. (As a columnar database, ClickHouse stores this data in a similar form.) The size of a block returned from a query is governed by two user settings that can be set at several levels (user profile, user, session, or query). They are:
- `max_block_size` -- Limit on the size of the block in rows. Default 65536.
- `preferred_block_size_bytes` -- Soft limit on the size of the block in bytes. Default 1,000,000.
Regardless of the `preferred_block_size_bytes` setting, each block will never be more than `max_block_size` rows. Depending on the type of query, the actual blocks returned can be of any size. For example, queries to a distributed table covering many shards may contain smaller blocks retrieved directly from each shard.
When using one of the Client `query_*_stream` methods, results are returned on a block by block basis. ClickHouse Connect only loads a single block at a time. This allows processing large amounts of data without the need to load all of a large result set into memory. Note that the application should be prepared to process any number of blocks, and the exact size of each block cannot be controlled.
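The memory behavior described above can be sketched with a streamed aggregation (the query and connection details are placeholders; executing it requires a running server): only the running total is held in memory, never the full result set:

```python
def sum_streamed(client):
    # Blocks arrive one at a time from the generator; only the running
    # total stays in memory, regardless of the result-set size.
    total = 0
    with client.query_rows_stream('SELECT number FROM system.numbers LIMIT 1000000') as stream:
        for row in stream:
            total += row[0]
    return total

# Usage (requires a running ClickHouse server):
# import clickhouse_connect
# client = clickhouse_connect.get_client(host='localhost')
# print(sum_streamed(client))  # sum of 0..999999, i.e. 499999500000
```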
HTTP data buffer for slow processing {#http-data-buffer-for-slow-processing}
Because of limitations in the HTTP protocol, if blocks are processed at a rate significantly slower than the ClickHouse server is streaming data, the ClickHouse server will close the connection, resulting in an exception being thrown in the processing thread. Some of this can be mitigated by increasing the size of the HTTP streaming buffer (which defaults to 10 megabytes) using the common `http_buffer_size` setting. Large `http_buffer_size` values should be okay in this situation if there is sufficient memory available to the application. Data in the buffer is stored compressed if using `lz4` or `zstd` compression, so using those compression types will increase the overall buffer available.
StreamContexts {#streamcontexts}
Each of the
query_*_stream
methods (like
query_row_block_stream
) returns a ClickHouse
StreamContext
object, which is a combined Python context/generator. This is the basic usage:
```python
with client.query_row_block_stream('SELECT pickup, dropoff, pickup_longitude, pickup_latitude FROM taxi_trips') as stream:
    for block in stream:
        for row in block:
            ...  # do something with each row of Python trip data
```
Note that trying to use a StreamContext without a
with
statement will raise an error. The use of a Python context ensures that the stream (in this case, a streaming HTTP response) will be properly closed even if not all the data is consumed and/or an exception is raised during processing. Also,
StreamContext
s can only be used once to consume the stream. Trying to use a
StreamContext
after it has exited will produce a
StreamClosedError
.
You can use the
source
property of the
StreamContext
to access the parent
QueryResult
object, which includes column names and types.
Stream types {#stream-types}
The
query_column_block_stream
method returns the block as a sequence of column data stored as native Python data types. Using the above
taxi_trips
queries, the data returned will be a list where each element of the list is another list (or tuple) containing all the data for the associated column. So
block[0]
would be a tuple containing nothing but strings. Column-oriented formats are most often used for aggregate operations over all the values in a column, like adding up total fares.
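That aggregate pattern can be sketched with plain Python over fake column-oriented blocks (the shapes mimic `query_column_block_stream` output; the data is made up):

```python
# Each block is a sequence of columns; each column holds that block's values.
def total_column(blocks, column_index):
    """Sum one numeric column across all column-oriented blocks."""
    return sum(sum(block[column_index]) for block in blocks)

# Two fake blocks: column 0 = pickup location, column 1 = fare
fake_blocks = [
    (('midtown', 'soho'), (7.5, 12.0)),
    (('tribeca',), (3.25,)),
]
print(total_column(fake_blocks, 1))  # 22.75
```

With a real stream, the same function works directly on the context, e.g. `total_column(stream, 1)` inside the `with` block.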
The
query_row_block_stream
method returns the block as a sequence of rows like a traditional relational database. For taxi trips, the data returned will be a list where each element of the list is another list representing a row of data. So
block[0]
would contain all the fields (in order) for the first taxi trip,
block[1]
would contain a row for all the fields in the second taxi trip, and so on. Row oriented results are normally used for display or transformation processes.
The
query_row_stream
is a convenience method that automatically moves to the next block when iterating through the stream. Otherwise, it is identical to
query_row_block_stream
.
The
query_np_stream
method returns each block as a two-dimensional NumPy array. Internally, NumPy arrays are (usually) stored as columns, so no distinct row or column methods are needed. The "shape" of the NumPy array will be expressed as (columns, rows). The NumPy library provides many methods of manipulating NumPy arrays. Note that if all columns in the query share the same NumPy dtype, the returned NumPy array will only have one dtype as well, and can be reshaped/rotated without actually changing its internal structure.
The
query_df_stream
method returns each ClickHouse Block as a two-dimensional Pandas DataFrame. Here's an example which shows that the
StreamContext
object can be used as a context in a deferred fashion (but only once).
```python
df_stream = client.query_df_stream('SELECT * FROM hits')
column_names = df_stream.source.column_names
with df_stream:
    for df in df_stream:
        ...  # do something with the pandas DataFrame
```
The
query_df_arrow_stream
method returns each ClickHouse Block as a DataFrame with PyArrow dtype backend. This method supports both Pandas (2.x or later) and Polars DataFrames via the
dataframe_library
parameter (defaults to
"pandas"
). Each iteration yields a DataFrame converted from PyArrow record batches, providing better performance and memory efficiency for certain data types.
Finally, the
query_arrow_stream
method returns a ClickHouse
ArrowStream
formatted result as a
pyarrow.ipc.RecordBatchStreamReader
wrapped in
StreamContext
. Each iteration of the stream returns a PyArrow RecordBatch.
Streaming examples {#streaming-examples}
Stream rows {#stream-rows}
```python
import clickhouse_connect

client = clickhouse_connect.get_client()

# Stream large result sets row by row
with client.query_row_stream("SELECT number, number * 2 as doubled FROM system.numbers LIMIT 100000") as stream:
    for row in stream:
        print(row)  # Process each row

# Output:
# (0, 0)
# (1, 2)
# (2, 4)
# ....
```
Stream row blocks {#stream-row-blocks}
```python
import clickhouse_connect

client = clickhouse_connect.get_client()

# Stream in blocks of rows (more efficient than row-by-row)
with client.query_row_block_stream("SELECT number, number * 2 FROM system.numbers LIMIT 100000") as stream:
    for block in stream:
        print(f"Received block with {len(block)} rows")

# Output:
# Received block with 65409 rows
# Received block with 34591 rows
```
Stream Pandas DataFrames {#stream-pandas-dataframes}
```python
import clickhouse_connect

client = clickhouse_connect.get_client()

# Stream query results as Pandas DataFrames
with client.query_df_stream("SELECT number, toString(number) AS str FROM system.numbers LIMIT 100000") as stream:
    for df in stream:
        # Process each DataFrame block
        print(f"Received DataFrame with {len(df)} rows")
        print(df.head(3))

# Output:
# Received DataFrame with 65409 rows
#    number str
# 0       0   0
# 1       1   1
# 2       2   2
# Received DataFrame with 34591 rows
#    number    str
# 0   65409  65409
# 1   65410  65410
# 2   65411  65411
```
Stream Arrow batches {#stream-arrow-batches}
```python
import clickhouse_connect
client = clickhouse_connect.get_client()
# Stream query results as Arrow record batches
with client.query_arrow_stream("SELECT * FROM large_table") as stream:
    for arrow_batch in stream:
        # Process each Arrow batch
        print(f"Received Arrow batch with {arrow_batch.num_rows} rows")

# Output:
# Received Arrow batch with 65409 rows
# Received Arrow batch with 34591 rows
```
NumPy, Pandas, and Arrow queries {#numpy-pandas-and-arrow-queries}
ClickHouse Connect provides specialized query methods for working with NumPy, Pandas, and Arrow data structures. These methods allow you to retrieve query results directly in these popular data formats without manual conversion.
NumPy queries {#numpy-queries}
The
query_np
method returns query results as a NumPy array instead of a ClickHouse Connect
QueryResult
.
```python
import clickhouse_connect

client = clickhouse_connect.get_client()

# Query returns a NumPy array
np_array = client.query_np("SELECT number, number * 2 AS doubled FROM system.numbers LIMIT 5")

print(type(np_array))
# Output: <class 'numpy.ndarray'>

print(np_array)
# Output:
# [[0 0]
#  [1 2]
#  [2 4]
#  [3 6]
#  [4 8]]
```
Pandas queries {#pandas-queries}
The
query_df
method returns query results as a Pandas DataFrame instead of a ClickHouse Connect
QueryResult
.
```python
import clickhouse_connect

client = clickhouse_connect.get_client()

# Query returns a Pandas DataFrame
df = client.query_df("SELECT number, number * 2 AS doubled FROM system.numbers LIMIT 5")

print(type(df))
# Output: <class 'pandas.core.frame.DataFrame'>

print(df)
# Output:
#    number  doubled
# 0       0        0
# 1       1        2
# 2       2        4
# 3       3        6
# 4       4        8
```
PyArrow queries {#pyarrow-queries}
The
query_arrow
method returns query results as a PyArrow Table. It utilizes the ClickHouse
Arrow
format directly, so it only accepts three arguments in common with the main
query
method:
query
,
parameters
, and
settings
. In addition, there is an additional argument,
use_strings
, which determines whether the Arrow Table will render ClickHouse String types as strings (if True) or bytes (if False).
```python
import clickhouse_connect

client = clickhouse_connect.get_client()

# Query returns a PyArrow Table
arrow_table = client.query_arrow("SELECT number, toString(number) AS str FROM system.numbers LIMIT 3")

print(type(arrow_table))
# Output: <class 'pyarrow.lib.Table'>

print(arrow_table)
# Output:
# pyarrow.Table
# number: uint64 not null
# str: string not null
# ----
# number: [[0,1,2]]
# str: [["0","1","2"]]
```
Arrow-backed DataFrames {#arrow-backed-dataframes}
ClickHouse Connect supports fast, memory‑efficient DataFrame creation from Arrow results via the
query_df_arrow
and
query_df_arrow_stream
methods. These are thin wrappers around the Arrow query methods and perform zero‑copy conversions to DataFrames where possible:
query_df_arrow
: Executes the query using the ClickHouse
Arrow
output format and returns a DataFrame.
For
dataframe_library='pandas'
, returns a pandas 2.x DataFrame using Arrow‑backed dtypes (
pd.ArrowDtype
). This requires pandas 2.x and leverages zero‑copy buffers where possible for excellent performance and low memory overhead.
For
dataframe_library='polars'
, returns a Polars DataFrame created from the Arrow table (
pl.from_arrow
), which is similarly efficient and can be zero‑copy depending on the data.
query_df_arrow_stream
: Streams results as a sequence of DataFrames (pandas 2.x or Polars) converted from Arrow stream batches.
Query to Arrow-backed DataFrame {#query-to-arrow-backed-dataframe}
```python
import clickhouse_connect

client = clickhouse_connect.get_client()

# Query returns a Pandas DataFrame with Arrow dtypes (requires pandas 2.x)
df = client.query_df_arrow(
    "SELECT number, toString(number) AS str FROM system.numbers LIMIT 3",
    dataframe_library="pandas"
)
print(df.dtypes)
# Output:
# number    uint64[pyarrow]
# str        string[pyarrow]
# dtype: object

# Or use Polars
polars_df = client.query_df_arrow(
    "SELECT number, toString(number) AS str FROM system.numbers LIMIT 3",
    dataframe_library="polars"
)
print(polars_df.dtypes)
# Output: [UInt64, String]

# Streaming into batches of DataFrames (Polars shown)
with client.query_df_arrow_stream(
    "SELECT number, toString(number) AS str FROM system.numbers LIMIT 100000", dataframe_library="polars"
) as stream:
    for df_batch in stream:
        print(f"Received {type(df_batch)} batch with {len(df_batch)} rows and dtypes: {df_batch.dtypes}")
# Output:
# Received <class 'polars.dataframe.frame.DataFrame'> batch with 65409 rows and dtypes: [UInt64, String]
# Received <class 'polars.dataframe.frame.DataFrame'> batch with 34591 rows and dtypes: [UInt64, String]
```
Notes and caveats {#notes-and-caveats}
Arrow type mapping: When returning data in Arrow format, ClickHouse maps types to the closest supported Arrow types. Some ClickHouse types do not have a native Arrow equivalent and are returned as raw bytes in Arrow fields (usually
BINARY
or
FIXED_SIZE_BINARY
).
Examples:
IPv4
is represented as Arrow
UINT32
;
IPv6
and large integers (
Int128/UInt128/Int256/UInt256
) are often represented as
FIXED_SIZE_BINARY
/
BINARY
with raw bytes.
In these cases, the DataFrame column will contain byte values backed by the Arrow field; it is up to the client code to interpret/convert those bytes according to ClickHouse semantics.
Unsupported Arrow data types (e.g., UUID/ENUM as true Arrow types) are not emitted; values are represented using the closest supported Arrow type (often as binary bytes) for output.
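For instance, a 16-byte IPv6 value delivered as raw bytes can be reinterpreted with the standard library (a sketch; the value shown is made up):

```python
import ipaddress

# A hypothetical raw value as it might arrive in a FIXED_SIZE_BINARY(16) Arrow field
raw = bytes.fromhex('20010db8000000000000000000000001')

# ipaddress accepts the 16-byte packed form directly
addr = ipaddress.IPv6Address(raw)
print(addr)  # 2001:db8::1
```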
Pandas requirement: Arrow‑backed dtypes require pandas 2.x. For older pandas versions, use
query_df
(non‑Arrow) instead.
Strings vs binary: The
use_strings
option (when supported by the server setting
output_format_arrow_string_as_string
) controls whether ClickHouse
String
columns are returned as Arrow strings or as binary.
Mismatched ClickHouse/Arrow type conversion examples {#mismatched-clickhousearrow-type-conversion-examples}
When ClickHouse returns columns as raw binary data (e.g.,
FIXED_SIZE_BINARY
or
BINARY
), it is the responsibility of application code to convert these bytes to appropriate Python types. The examples below illustrate that some conversions are feasible using DataFrame library APIs, while others may require pure Python approaches like
struct.unpack
(which sacrifice performance but maintain flexibility).
Date
columns can arrive as
UINT16
(days since the Unix epoch, 1970‑01‑01). Converting inside the DataFrame is efficient and straightforward:
```python
# Polars
df = df.with_columns(pl.col("event_date").cast(pl.Date))

# Pandas
df["event_date"] = pd.to_datetime(df["event_date"], unit="D")
```
Columns like
Int128
can arrive as
FIXED_SIZE_BINARY
with raw bytes. Polars provides native support for 128-bit integers:
```python
# Polars - native support
df = df.with_columns(pl.col("data").bin.reinterpret(dtype=pl.Int128, endianness="little"))
```
As of NumPy 2.3 there is no public 128-bit integer dtype, so we must fall back to pure Python and can do something like:
```python
# Assuming we have a Pandas DataFrame with an Int128 column of dtype fixed_size_binary[16][pyarrow]
print(df)
# Output:
#   str_col                                        int_128_col
# 0    num1  b'\x15}\xda\xeb\x18ZU\x0fn\x05\x01\x00\x00\x00...
# 1    num2  b'\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00...
# 2    num3  b'\x15\xdfp\x81r\x9f\x01\x00\x00\x00\x00\x00\x...

print([int.from_bytes(n, byteorder="little") for n in df["int_128_col"].to_list()])
# Output: [1234567898765432123456789, 8, 456789123456789]
```
The key takeaway: application code must handle these conversions based on the capabilities of the chosen DataFrame library and the acceptable performance trade-offs. When DataFrame-native conversions aren't available, pure Python approaches remain an option.
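As one illustration of the pure-Python route mentioned above, `struct` has no 128-bit format code, but an `Int128` can be unpacked as two 64-bit halves and recombined (the sample value is made up):

```python
import struct

# A little-endian 16-byte Int128 value, as it might arrive in raw binary form
raw = (123456789).to_bytes(16, 'little', signed=True)

# Unsigned low 64 bits, signed high 64 bits
low, high = struct.unpack('<Qq', raw)
value = (high << 64) | low
print(value)  # 123456789
```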
Read formats {#read-formats}
Read formats control the data types of values returned from the client
query
,
query_np
, and
query_df
methods. (The
raw_query
and
query_arrow
do not modify incoming data from ClickHouse, so format control does not apply.) For example, if the read format for a UUID is changed from the default
native
format to the alternative
string
format, a ClickHouse query of a
UUID
column will be returned as string values (using the standard 8-4-4-4-12 RFC 4122 format) instead of Python UUID objects.
The "data type" argument for any formatting function can include wildcards. The format is a single lower case string.
Read formats can be set at several levels:
Globally, using the methods defined in the
clickhouse_connect.datatypes.format
package. This will control the format of the configured datatype for all queries.
```python
from clickhouse_connect.datatypes.format import set_read_format

# Return both IPv6 and IPv4 values as strings
set_read_format('IPv*', 'string')

# Return all Date types as the underlying epoch second or epoch day
set_read_format('Date*', 'int')
```

For an entire query, using the optional `query_formats` dictionary argument. In that case any column (or subcolumn) of the specified data type(s) will use the configured format.

```python
# Return any UUID column as a string
client.query('SELECT user_id, user_uuid, device_uuid from users', query_formats={'UUID': 'string'})
```

For the values in a specific column, using the optional `column_formats` dictionary argument. The key is the column name as returned by ClickHouse; the value is either a format for the data column or a second-level dictionary mapping a ClickHouse type name to a format. This secondary dictionary can be used for nested column types such as Tuples or Maps.

```python
# Return IPv6 values in the `dev_address` column as strings
client.query('SELECT device_id, dev_address, gw_address from devices', column_formats={'dev_address':'string'})
```
Read format options (Python types) {#read-format-options-python-types}
| ClickHouse Type | Native Python Type | Read Formats | Comments |
|-----------------------|-------------------------|-------------------|-------------------------------------------------------------------------------------------------------------------|
| Int[8-64], UInt[8-32] | int | - | |
| UInt64 | int | signed | Superset does not currently handle large unsigned UInt64 values |
| [U]Int[128,256] | int | string | Pandas and NumPy int values are 64 bits maximum, so these can be returned as strings |
| BFloat16 | float | - | All Python floats are 64 bits internally |
| Float32 | float | - | All Python floats are 64 bits internally |
| Float64 | float | - | |
| Decimal | decimal.Decimal | - | |
| String | string | bytes | ClickHouse String columns have no inherent encoding, so they are also used for variable length binary data |
| FixedString | bytes | string | FixedStrings are fixed size byte arrays, but sometimes are treated as Python strings |
| Enum[8,16] | string | string, int | Python enums don't accept empty strings, so all enums are rendered as either strings or the underlying int value. |
| Date | datetime.date | int | ClickHouse stores Dates as days since 01/01/1970. This value is available as an int |
| Date32 | datetime.date | int | Same as Date, but for a wider range of dates |
| DateTime | datetime.datetime | int | ClickHouse stores DateTime in epoch seconds. This value is available as an int |
| DateTime64 | datetime.datetime | int | Python datetime.datetime is limited to microsecond precision. The raw 64 bit int value is available |
| Time | datetime.timedelta | int, string, time | The point in time is saved as a Unix timestamp. This value is available as an int |
| Time64 | datetime.timedelta | int, string, time | Python datetime.timedelta is limited to microsecond precision. The raw 64 bit int value is available |
| IPv4 | ipaddress.IPv4Address | string | IP addresses can be read as strings and properly formatted strings can be inserted as IP addresses |
| IPv6 | ipaddress.IPv6Address | string | IP addresses can be read as strings and properly formatted strings can be inserted as IP addresses |
| Tuple | dict or tuple | tuple, json | Named tuples returned as dictionaries by default. Named tuples can also be returned as JSON strings |
| Map | dict | - | |
| Nested | Sequence[dict] | - | |
| UUID | uuid.UUID | string | UUIDs can be read as strings formatted as per RFC 4122 |
| JSON | dict | string | A Python dictionary is returned by default. The string format will return a JSON string |
| Variant | object | - | Returns the matching Python type for the ClickHouse datatype stored for the value |
| Dynamic | object | - | Returns the matching Python type for the ClickHouse datatype stored for the value |
External data {#external-data}
ClickHouse queries can accept external data in any ClickHouse format. This binary data is sent along with the query string to be used to process the data. Details of the External Data feature are
here
. The client
query*
methods accept an optional
external_data
parameter to take advantage of this feature. The value for the
external_data
parameter should be a
clickhouse_connect.driver.external.ExternalData
object. The constructor for that object accepts the following arguments:
| Name | Type | Description |
|-----------|-------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|
| file_path | str | Path to a file on the local system path to read the external data from. Either file_path or data is required |
| file_name | str | The name of the external data "file". If not provided, will be determined from the file_path (without extensions) |
| data | bytes | The external data in binary form (instead of being read from a file). Either data or file_path is required |
| fmt | str | The ClickHouse Input Format of the data. Defaults to TSV |
| types | str or seq of str | A list of column data types in the external data. If a string, types should be separated by commas. Either types or structure is required |
| structure | str or seq of str | A list of column name + data type in the data (see examples). Either structure or types is required |
| mime_type | str | Optional MIME type of the file data. Currently ClickHouse ignores this HTTP subheader |
To send a query with an external CSV file containing "movie" data, and combine that data with a
directors
table already present on the ClickHouse server:
```python
import clickhouse_connect
from clickhouse_connect.driver.external import ExternalData
client = clickhouse_connect.get_client()
ext_data = ExternalData(file_path='/data/movies.csv',
fmt='CSV',
structure=['movie String', 'year UInt16', 'rating Decimal32(3)', 'director String'])
result = client.query('SELECT name, avg(rating) FROM directors INNER JOIN movies ON directors.name = movies.director GROUP BY directors.name',
external_data=ext_data).result_rows
```
Additional external data files can be added to the initial
ExternalData
object using the
add_file
method, which takes the same parameters as the constructor. For HTTP, all external data is transmitted as part of a
multi-part/form-data
file upload.
Time zones {#time-zones}
There are multiple mechanisms for applying a time zone to ClickHouse DateTime and DateTime64 values. Internally, the ClickHouse server always stores any DateTime or
DateTime64
object as a time zone naive number representing seconds since the epoch, 1970-01-01 00:00:00 UTC time. For
DateTime64
values, the representation can be milliseconds, microseconds, or nanoseconds since the epoch, depending on precision. As a result, the application of any time zone information always occurs on the client side. Note that this involves meaningful extra calculation, so in performance critical applications it is recommended to treat DateTime types as epoch timestamps except for user display and conversion (Pandas Timestamps, for example, are always a 64-bit integer representing epoch nanoseconds to improve performance).
When using time zone aware data types in queries - in particular the Python
datetime.datetime
object --
clickhouse-connect
applies a client side time zone using the following precedence rules:
If the query method parameter
column_tzs
is specified for the query, the specific column time zone is applied
If the ClickHouse column has timezone metadata (i.e., it is a type like DateTime64(3, 'America/Denver')), the ClickHouse column timezone is applied. (Note this timezone metadata is not available to clickhouse-connect for DateTime columns prior to ClickHouse version 23.2)
If the query method parameter
query_tz
is specified for the query, the "query timezone" is applied.
If a timezone setting is applied to the query or session, that timezone is applied. (This functionality is not yet released in the ClickHouse server)
Finally, if the client
apply_server_timezone
parameter has been set to True (the default), the ClickHouse server timezone is applied.
Note that if the applied timezone based on these rules is UTC,
clickhouse-connect
will
always
return a time zone naive Python
datetime.datetime
object. Additional timezone information can then be added to this timezone naive object by the application code if desired.
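Such post-hoc localization of a naive UTC value can be done with the standard library (the values are illustrative):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# A timezone naive value as clickhouse-connect would return it when the
# applied zone is UTC
naive = datetime(2024, 1, 15, 12, 0, 0)

# Application code attaches UTC, then converts for display
aware = naive.replace(tzinfo=timezone.utc).astimezone(ZoneInfo('America/Denver'))
print(aware.isoformat())  # 2024-01-15T05:00:00-07:00
```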
title: 'Java'
keywords: ['clickhouse', 'java', 'jdbc', 'client', 'integrate', 'r2dbc']
description: 'Options for connecting to ClickHouse from Java'
slug: /integrations/java
doc_type: 'reference'
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import CodeBlock from '@theme/CodeBlock';
Java clients overview
Client 0.8+
JDBC 0.8+
R2DBC Driver
ClickHouse client {#clickhouse-client}
The Java client is a library implementing its own API that abstracts the details of network communication with the ClickHouse server. Currently, only the HTTP interface is supported. The library provides utilities for working with different ClickHouse formats and other related functions.
The Java client was originally developed back in 2015. Its codebase became very hard to maintain, its API grew confusing, and further optimization was difficult. So in 2024 we refactored it into a new component,
client-v2
. It has a clear API, a lighter codebase, more performance improvements, and better support for ClickHouse formats (mainly RowBinary and Native). JDBC will use this client in the near future.
Supported data types {#supported-data-types}
| Data Type | Client V2 Support | Client V1 Support |
|-----------------------|---------------------|---------------------|
|Int8 |✔ |✔ |
|Int16 |✔ |✔ |
|Int32 |✔ |✔ |
|Int64 |✔ |✔ |
|Int128 |✔ |✔ |
|Int256 |✔ |✔ |
|UInt8 |✔ |✔ |
|UInt16 |✔ |✔ |
|UInt32 |✔ |✔ |
|UInt64 |✔ |✔ |
|UInt128 |✔ |✔ |
|UInt256 |✔ |✔ |
|Float32 |✔ |✔ |
|Float64 |✔ |✔ |
|Decimal |✔ |✔ |
|Decimal32 |✔ |✔ |
|Decimal64 |✔ |✔ |
|Decimal128 |✔ |✔ |
|Decimal256 |✔ |✔ |
|Bool |✔ |✔ |
|String |✔ |✔ |
|FixedString |✔ |✔ |
|Nullable |✔ |✔ |
|Date |✔ |✔ |
|Date32 |✔ |✔ |
|DateTime |✔ |✔ |
|DateTime32 |✔ |✔ |
|DateTime64 |✔ |✔ |
|Interval |✗ |✗ |
|Enum |✔ |✔ |
|Enum8 |✔ |✔ |
|Enum16 |✔ |✔ |
|Array |✔ |✔ |
|Map |✔ |✔ |
|Nested |✔ |✔ |
|Tuple |✔ |✔ |
|UUID |✔ |✔ |
|IPv4 |✔ |✔ |
|IPv6 |✔ |✔ |
|Object |✗ |✔ |