tag_list — A rule for tagged metrics, a simple DSL for easier metric description in Graphite format: `someName;tag1=value1;tag2=value2`, `someName`, or `tag1=value1;tag2=value2`. The field `regexp` is translated into a `tagged` rule. Sorting by tag names is unnecessary; it is done automatically. A tag's value (but not a name) can be set as a regular expression, e.g. `env=(dev|staging)`.
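To illustrate how such a `tag_list` rule could expand into a `tagged` regular expression, here is a minimal Python sketch. The function name and the multi-tag joining are assumptions for illustration; ClickHouse performs this translation internally.

```python
def tag_list_to_tagged(rule: str) -> str:
    # Illustrative sketch only -- not ClickHouse's actual translation code.
    # Graphite stores tagged metrics as "name?tag1=value1&tag2=value2".
    parts = rule.split(";")
    if "=" in parts[0]:
        name, tags = "[\\w.]+", parts      # pure tag rule: match any metric name
    else:
        name, tags = parts[0], parts[1:]
    tags.sort(key=lambda t: t.split("=", 1)[0])  # tags are sorted by name for you
    pattern = "^" + name + "\\?"
    for tag in tags:
        pattern += "(.*&)*" + tag + "(&|$)"
    return pattern

# Mirrors the documented tagged-rule regexp for someName;tag1=value1:
print(tag_list_to_tagged("someName;tag1=value1"))
# ^someName\?(.*&)*tag1=value1(&|$)
```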
- regexp — A pattern for the metric name (a regular expression or the DSL).
- age — The minimum age of the data in seconds.
- precision — How precisely to define the age of the data in seconds. Should be a divisor of 86400 (seconds in a day).
- function — The name of the aggregating function to apply to data whose age falls within the range `[age, age + precision]`. Accepted functions: min / max / any / avg. The average is calculated imprecisely, like the average of the averages.
Configuration Example without rules types {#configuration-example}
```xml
<graphite_rollup>
    <version_column_name>Version</version_column_name>
    <pattern>
        <regexp>click_cost</regexp>
        <function>any</function>
        <retention>
            <age>0</age>
            <precision>5</precision>
        </retention>
        <retention>
            <age>86400</age>
            <precision>60</precision>
        </retention>
    </pattern>
    <default>
        <function>max</function>
        <retention>
            <age>0</age>
            <precision>60</precision>
        </retention>
        <retention>
            <age>3600</age>
            <precision>300</precision>
        </retention>
        <retention>
            <age>86400</age>
            <precision>3600</precision>
        </retention>
    </default>
</graphite_rollup>
```
Configuration Example with rules types {#configuration-typed-example}
```xml
<graphite_rollup>
    <version_column_name>Version</version_column_name>
    <pattern>
        <rule_type>plain</rule_type>
        <regexp>click_cost</regexp>
        <function>any</function>
        <retention>
            <age>0</age>
            <precision>5</precision>
        </retention>
        <retention>
            <age>86400</age>
            <precision>60</precision>
        </retention>
    </pattern>
    <pattern>
        <rule_type>tagged</rule_type>
        <regexp>^((.*)|.)min\?</regexp>
        <function>min</function>
        <retention>
            <age>0</age>
            <precision>5</precision>
        </retention>
        <retention>
            <age>86400</age>
            <precision>60</precision>
        </retention>
    </pattern>
    <pattern>
        <rule_type>tagged</rule_type>
        <regexp><![CDATA[^someName\?(.*&)*tag1=value1(&|$)]]></regexp>
        <function>min</function>
        <retention>
            <age>0</age>
            <precision>5</precision>
        </retention>
        <retention>
            <age>86400</age>
            <precision>60</precision>
        </retention>
    </pattern>
    <pattern>
        <rule_type>tag_list</rule_type>
        <regexp>someName;tag2=value2</regexp>
        <retention>
            <age>0</age>
            <precision>5</precision>
        </retention>
        <retention>
            <age>86400</age>
            <precision>60</precision>
        </retention>
    </pattern>
    <default>
        <function>max</function>
        <retention>
            <age>0</age>
            <precision>60</precision>
        </retention>
        <retention>
            <age>3600</age>
            <precision>300</precision>
        </retention>
        <retention>
            <age>86400</age>
            <precision>3600</precision>
        </retention>
    </default>
</graphite_rollup>
```
:::note
Data rollup is performed during merges. Usually, for old partitions, merges are not started, so for rollup it is necessary to trigger an unscheduled merge using `optimize`. Or use additional tools, for example graphite-ch-optimizer.
:::
---
description: 'differs from MergeTree in that it removes duplicate entries with the same sorting key value (ORDER BY table section, not PRIMARY KEY).'
sidebar_label: 'ReplacingMergeTree'
sidebar_position: 40
slug: /engines/table-engines/mergetree-family/replacingmergetree
title: 'ReplacingMergeTree table engine'
doc_type: 'reference'
---
ReplacingMergeTree table engine
The engine differs from MergeTree in that it removes duplicate entries with the same sorting key value (`ORDER BY` table section, not `PRIMARY KEY`).
Data deduplication occurs only during a merge. Merging occurs in the background at an unknown time, so you cannot plan for it. Some of the data may remain unprocessed. Although you can run an unscheduled merge using the `OPTIMIZE` query, do not count on it, because the `OPTIMIZE` query will read and write a large amount of data.

Thus, `ReplacingMergeTree` is suitable for clearing out duplicate data in the background in order to save space, but it does not guarantee the absence of duplicates.
:::note
A detailed guide on ReplacingMergeTree, including best practices and how to optimize performance, is available here.
:::
Creating a table {#creating-a-table}
```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
    ...
) ENGINE = ReplacingMergeTree([ver [, is_deleted]])
[PARTITION BY expr]
[ORDER BY expr]
[PRIMARY KEY expr]
[SAMPLE BY expr]
[SETTINGS name=value, ...]
```
For a description of request parameters, see the statement description.

:::note
Uniqueness of rows is determined by the `ORDER BY` table section, not `PRIMARY KEY`.
:::
ReplacingMergeTree parameters {#replacingmergetree-parameters}
ver {#ver}
`ver` — column with the version number. Type `UInt*`, `Date`, `DateTime` or `DateTime64`. Optional parameter.
When merging, `ReplacingMergeTree` leaves only one row from all the rows with the same sorting key:

- The last in the selection, if `ver` is not set. A selection is a set of rows in a set of parts participating in the merge. The most recently created part (the last insert) will be the last one in the selection. Thus, after deduplication, the very last row from the most recent insert will remain for each unique sorting key.
- The one with the maximum version, if `ver` is specified. If `ver` is the same for several rows, the "if `ver` is not specified" rule applies to them, i.e. the most recently inserted row remains.
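The selection rules above can be modeled with a small Python sketch (the function name and row representation are hypothetical; this is a model of the documented behavior, not the actual merge code):

```python
def deduplicate(rows, use_ver=False):
    # Sketch of merge-time deduplication; `rows` are (key, value, ver) tuples
    # in insertion order. Not the actual ClickHouse merge implementation.
    kept = {}
    for row in rows:
        key = row[0]
        if key not in kept:
            kept[key] = row
        elif not use_ver or row[2] >= kept[key][2]:
            kept[key] = row  # later insert wins ties; a higher ver always wins
    return list(kept.values())

rows = [(1, 'first', '2020-01-01 01:01:01'),
        (1, 'second', '2020-01-01 00:00:00')]
print(deduplicate(rows))                # without ver: 'second' (last inserted) wins
print(deduplicate(rows, use_ver=True))  # with ver=eventTime: 'first' (max ver) wins
```

This mirrors the two SQL examples that follow.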
Example:
```sql
-- without ver - the last inserted 'wins'
CREATE TABLE myFirstReplacingMT
(
    `key` Int64,
    `someCol` String,
    `eventTime` DateTime
)
ENGINE = ReplacingMergeTree
ORDER BY key;

INSERT INTO myFirstReplacingMT Values (1, 'first', '2020-01-01 01:01:01');
INSERT INTO myFirstReplacingMT Values (1, 'second', '2020-01-01 00:00:00');

SELECT * FROM myFirstReplacingMT FINAL;
-0.02417718805372715,
0.021216847002506256,
0.10089269280433655,
-0.010280340909957886,
-0.0005882348632439971,
-0.07112731039524078,
-0.05045546591281891,
0.010875300504267216,
0.008857471868395805,
0.05202646180987358,
0.08940237760543823,
0.09433542191982269,
0.006755856331437826,
-0.12... |
ββkeyββ¬βsomeColββ¬βββββββββββeventTimeββ
β 1 β second β 2020-01-01 00:00:00 β
βββββββ΄ββββββββββ΄ββββββββββββββββββββββ
-- with ver - the row with the biggest ver 'wins'
CREATE TABLE mySecondReplacingMT
(
    `key` Int64,
    `someCol` String,
    `eventTime` DateTime
)
ENGINE = ReplacingMergeTree(eventTime)
ORDER BY key;
INSERT INTO mySecondReplacingMT Values (1, 'first', '2020-01-01 01:01:01');
INSERT INTO mySecondReplacingMT Values (1, 'second', '2020-01-01 00:00:00');
SELECT * FROM mySecondReplacingMT FINAL;
ββkeyββ¬βsomeColββ¬βββββββββββeventTimeββ
β 1 β first β 2020-01-01 01:01:01 β
βββββββ΄ββββββββββ΄ββββββββββββββββββββββ
```
is_deleted {#is_deleted}
`is_deleted` — Name of a column used during a merge to determine whether the data in this row represents the state or is to be deleted; `1` is a "deleted" row, `0` is a "state" row. Column data type — `UInt8`.
:::note
`is_deleted` can only be enabled when `ver` is used.

No matter the operation on the data, the version should be increased. If two inserted rows have the same version number, the last inserted row is kept.

By default, ClickHouse will keep the last row for a key even if that row is a delete row. This is so that any future rows with lower versions can be safely inserted and the delete row will still be applied.

To permanently drop such delete rows, enable the table setting `allow_experimental_replacing_merge_with_cleanup` and either:

- Set the table settings `enable_replacing_merge_with_cleanup_for_min_age_to_force_merge`, `min_age_to_force_merge_on_partition_only` and `min_age_to_force_merge_seconds`. If all parts in a partition are older than `min_age_to_force_merge_seconds`, ClickHouse will merge them all into a single part and remove any delete rows.
- Manually run `OPTIMIZE TABLE table [PARTITION partition | PARTITION ID 'partition_id'] FINAL CLEANUP`.
:::
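A Python sketch of the `is_deleted` semantics described above (a hypothetical model of the documented behavior, not the actual merge implementation):

```python
def merge(rows, cleanup=False):
    # Sketch of ReplacingMergeTree(ver, is_deleted) semantics; rows are
    # (key, value, ver, is_deleted) tuples in insertion order.
    latest = {}
    for key, value, ver, is_deleted in rows:
        if key not in latest or ver >= latest[key][1]:
            latest[key] = (value, ver, is_deleted)
    # Without cleanup the winning delete row stays stored (so older versions
    # inserted later remain suppressed); cleanup drops delete rows for good.
    stored = {k: v for k, v in latest.items() if not (cleanup and v[2] == 1)}
    visible = {k: v for k, v in stored.items() if v[2] == 0}  # what FINAL shows
    return stored, visible

rows = [(1, 'first', '2020-01-01 01:01:01', 0),
        (1, 'first', '2020-01-01 01:01:01', 1)]
stored, visible = merge(rows)
print(len(stored), len(visible))    # 1 0: delete row kept in storage, but hidden
stored, visible = merge(rows, cleanup=True)
print(len(stored), len(visible))    # 0 0: delete row removed permanently
```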
Example:
```sql
-- with ver and is_deleted
CREATE OR REPLACE TABLE myThirdReplacingMT
(
    `key` Int64,
    `someCol` String,
    `eventTime` DateTime,
    `is_deleted` UInt8
)
ENGINE = ReplacingMergeTree(eventTime, is_deleted)
ORDER BY key
SETTINGS allow_experimental_replacing_merge_with_cleanup = 1;

INSERT INTO myThirdReplacingMT Values (1, 'first', '2020-01-01 01:01:01', 0);
INSERT INTO myThirdReplacingMT Values (1, 'first', '2020-01-01 01:01:01', 1);

SELECT * FROM myThirdReplacingMT FINAL;
0 rows in set. Elapsed: 0.003 sec.
-- delete rows with is_deleted
OPTIMIZE TABLE myThirdReplacingMT FINAL CLEANUP;
INSERT INTO myThirdReplacingMT Values (1, 'first', '2020-01-01 00:00:00', 0);
select * from myThirdReplacingMT final;
ββkeyββ¬βsomeColββ¬βββββββββββeventTimeββ¬βis_deletedββ
β 1 β first β 2020-01-01 00:00:00 β 0 β
βββββββ΄ββββββββββ΄ββββββββββββββββββββββ΄βββββββββββββ
``` | {"source_file": "replacingmergetree.md"} | [
Query clauses {#query-clauses}
When creating a `ReplacingMergeTree` table, the same clauses are required as when creating a `MergeTree` table.

Deprecated Method for Creating a Table
:::note
Do not use this method in new projects and, if possible, switch old projects to the method described above.
:::
```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
...
) ENGINE [=] ReplacingMergeTree(date-column [, sampling_expression], (primary, key), index_granularity, [ver])
```
All of the parameters excepting `ver` have the same meaning as in `MergeTree`.
- `ver` - column with the version. Optional parameter. For a description, see the text above.
Query time de-duplication & FINAL {#query-time-de-duplication--final}
At merge time, the ReplacingMergeTree identifies duplicate rows, using the values of the `ORDER BY` columns (used to create the table) as a unique identifier, and retains only the highest version. This, however, offers eventual correctness only: it does not guarantee rows will be deduplicated, and you should not rely on it. Queries can therefore produce incorrect answers due to update and delete rows being considered in queries.

To obtain correct answers, users will need to complement background merges with query time deduplication and deletion removal. This can be achieved using the `FINAL` operator. For example:
```sql
CREATE TABLE rmt_example
(
    `number` UInt16
)
ENGINE = ReplacingMergeTree
ORDER BY number;

INSERT INTO rmt_example SELECT floor(randUniform(0, 100)) AS number
FROM numbers(1000000000)

0 rows in set. Elapsed: 19.958 sec. Processed 1.00 billion rows, 8.00 GB (50.11 million rows/s., 400.84 MB/s.)
```
Querying without `FINAL` produces an incorrect count (the exact result will vary depending on merges):
```sql
SELECT count()
FROM rmt_example
ββcount()ββ
β 200 β
βββββββββββ
1 row in set. Elapsed: 0.002 sec.
```
Adding `FINAL` produces the correct result:
```sql
SELECT count()
FROM rmt_example
FINAL
ββcount()ββ
β 100 β
βββββββββββ
1 row in set. Elapsed: 0.002 sec.
```
For further details on `FINAL`, including how to optimize `FINAL` performance, we recommend reading our detailed guide on ReplacingMergeTree.
---
sidebar_label: 'Cloud support'
title: 'Support'
slug: /cloud/support
description: 'Learn about Cloud Support'
keywords: ['ClickHouse Cloud', 'cloud support', 'customer support', 'technical assistance', 'managed service support']
hide_title: true
doc_type: 'guide'
---

import Content from '@site/docs/about-us/support.md';
---
sidebar_label: 'Security'
slug: /cloud/security
title: 'Security'
description: 'Learn more about securing ClickHouse Cloud and BYOC'
doc_type: 'reference'
keywords: ['security', 'cloud security', 'access control', 'compliance', 'data protection']
---
ClickHouse Cloud Security
This document details the security options and best practices available for ClickHouse organization and service protection.
ClickHouse is dedicated to providing secure analytical database solutions; therefore, safeguarding data and service integrity is a priority.
The information herein covers various methods designed to assist users in securing their ClickHouse environments.
Cloud Console Authentication {#cloud-console-auth}
Password Authentication {#password-auth}
ClickHouse Cloud console passwords are configured to NIST 800-63B standards with a minimum of 12 characters and 3 of 4 complexity requirements: upper case characters, lower case characters, numbers and/or special characters.
Learn more about password authentication.
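The stated policy can be sketched as a small validator (a hypothetical helper illustrating the rule above, not a ClickHouse Cloud API):

```python
import string

def meets_console_password_policy(password: str) -> bool:
    # Sketch of the stated rule: at least 12 characters and at least 3 of the
    # 4 character classes (upper, lower, digit, special).
    classes = [
        any(c.isupper() for c in password),
        any(c.islower() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return len(password) >= 12 and sum(classes) >= 3

print(meets_console_password_policy("Str0ngPassphrase"))  # True: upper, lower, digit
print(meets_console_password_policy("weakpw1!"))          # False: only 8 characters
```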
Social Single Sign-On (SSO) {#social-sso}
ClickHouse Cloud supports Google or Microsoft social authentication for single sign-on (SSO).
Learn more about social SSO.
Multi-Factor Authentication {#mfa}
Users using email and password or social SSO may also configure multi-factor authentication utilizing an authenticator app such as Authy or Google Authenticator.
Learn more about multi-factor authentication.
Security Assertion Markup Language (SAML) Authentication {#saml-auth}
Enterprise customers may configure SAML authentication.
Learn more about SAML authentication.
API Authentication {#api-auth}
Customers may configure API keys for use with OpenAPI, Terraform and Query API endpoints.
Learn more about API authentication.
Database Authentication {#database-auth}
Database Password Authentication {#db-password-auth}
ClickHouse database user passwords are configured to NIST 800-63B standards with a minimum of 12 characters and complexity requirements: upper case characters, lower case characters, numbers and/or special characters.
Learn more about database password authentication.
Secure Shell (SSH) Database Authentication {#ssh-auth}
ClickHouse database users may be configured to use SSH authentication.
Learn more about SSH authentication.
Access Control {#access-control}
Console Role-Based Access Control (RBAC) {#console-rbac}
ClickHouse Cloud supports role assignment for organization, service and database permissions. Database permissions using this method are supported in SQL console only.
Learn more about console RBAC.
Database User Grants {#database-user-grants}
ClickHouse databases support granular permission management and role-based access via user grants.
Learn more about database user grants.
Network Security {#network-security}
IP Filters {#ip-filters}
Configure IP filters to limit inbound connections to your ClickHouse service.
Learn more about IP filters.
Private Connectivity {#private-connectivity}
Connect to your ClickHouse clusters from AWS, GCP or Azure using private connectivity.
Learn more about private connectivity.
Encryption {#encryption}
Storage Level Encryption {#storage-encryption}
ClickHouse Cloud encrypts data at rest by default using cloud provider-managed AES 256 keys.
Learn more about storage encryption.
Transparent Data Encryption {#tde}
In addition to storage encryption, ClickHouse Cloud Enterprise customers may enable database level transparent data encryption for additional protection.
Learn more about transparent data encryption.
Customer Managed Encryption Keys {#cmek}
ClickHouse Cloud Enterprise customers may use their own key for database level encryption.
Learn more about customer managed encryption keys.
Auditing and Logging {#auditing-logging}
Console Audit Log {#console-audit-log}
Activities within the console are logged. Logs are available for review and export.
Learn more about console audit logs.
Database Audit Logs {#database-audit-logs}
Activities within the database are logged. Logs are available for review and export.
Learn more about database audit logs.
BYOC Security Playbook {#byoc-security-playbook}
Sample detection queries for security teams managing ClickHouse BYOC instances.
Learn more about the BYOC security playbook.
Compliance {#compliance}
Security and Compliance Reports {#compliance-reports}
ClickHouse maintains a strong security and compliance program. Check back periodically for new third party audit reports.
Learn more about security and compliance reports.
HIPAA Compliant Services {#hipaa-compliance}
ClickHouse Cloud Enterprise customers may deploy services housing protected health information (PHI) to HIPAA compliant regions after signing a Business Associate Agreement (BAA).
Learn more about HIPAA compliance.
PCI Compliant Services {#pci-compliance}
ClickHouse Cloud Enterprise customers may deploy services housing credit card information to PCI compliant regions.
Learn more about PCI compliance.
---
sidebar_label: 'Backups'
slug: /cloud/features/backups
title: 'Backups'
keywords: ['backups', 'cloud backups', 'restore']
description: 'Provides an overview of backup features in ClickHouse Cloud'
doc_type: 'reference'
---
import Image from '@theme/IdealImage';
import ScalePlanFeatureBadge from '@theme/badges/ScalePlanFeatureBadge';
import EnterprisePlanFeatureBadge from '@theme/badges/EnterprisePlanFeatureBadge'
import backup_chain from '@site/static/images/cloud/manage/backup-chain.png';
Database backups provide a safety net by ensuring that if data is lost for any unforeseen reason, the service can be restored to a previous state from the last successful backup.
This minimizes downtime and prevents business critical data from being permanently lost.
Backups {#backups}
How backups work in ClickHouse Cloud {#how-backups-work-in-clickhouse-cloud}
ClickHouse Cloud backups are a combination of "full" and "incremental" backups that constitute a backup chain. The chain starts with a full backup, and incremental backups are then taken over the next several scheduled time periods to create a sequence of backups. Once a backup chain reaches a certain length, a new chain is started. This entire chain of backups can then be utilized to restore data to a new service if needed. Once all backups included in a specific chain are past the retention time frame set for the service (more on retention below), the chain is discarded.
In the screenshot below, the solid line squares show full backups and the dotted line squares show incremental backups. The solid line rectangle around the squares denotes the retention period and the backups that are visible to the end user, which can be used for a backup restore. In the scenario below, backups are being taken every 24 hours and are retained for 2 days.
On Day 1, a full backup is taken to start the backup chain. On Day 2, an incremental backup is taken, and we now have a full and incremental backup available to restore from. By Day 7, we have one full backup and six incremental backups in the chain, with the most recent two incremental backups visible to the user. On Day 8, we take a new full backup, and on Day 9, once we have two backups in the new chain, the previous chain is discarded.
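The scenario above can be modeled with a short Python sketch (the function name, chain length, and retention values are illustrative parameters taken from the described scenario, not fixed product behavior):

```python
def visible_backups(day, chain_len=7, retention_days=2):
    # Sketch: one backup per day, a new full backup starting a fresh chain
    # every `chain_len` days, and only backups inside the retention window
    # offered for restore.
    backups = [(d, "full" if (d - 1) % chain_len == 0 else "incremental")
               for d in range(1, day + 1)]
    return [(d, kind) for d, kind in backups if day - d < retention_days]

print(visible_backups(7))  # [(6, 'incremental'), (7, 'incremental')]
print(visible_backups(8))  # [(7, 'incremental'), (8, 'full')] -- new chain starts
```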
Default backup policy {#default-backup-policy}
In the Basic, Scale, and Enterprise tiers, backups are metered and billed separately from storage.
All services will default to one daily backup with the ability to configure more, starting with the Scale tier, via the Settings tab of the Cloud console.
Each backup will be retained for at least 24 hours.
See "Review and restore backups" for further details.
Configurable backups {#configurable-backups}
ClickHouse Cloud allows you to configure the schedule for your backups for Scale and Enterprise tier services. Backups can be configured along the following dimensions based on your business needs.
Retention: The duration in days for which each backup will be retained. Retention can be specified as low as 1 day and as high as 30 days, with several values to pick in between.
Frequency: The frequency allows you to specify the time duration between subsequent backups. For instance, a frequency of "every 12 hours" means that backups will be spaced 12 hours apart. Frequency can range from "every 6 hours" to "every 48 hours" in the following hourly increments: 6, 8, 12, 16, 20, 24, 36, 48.
Start Time: The start time for when you want to schedule backups each day. Specifying a start time implies that the backup "Frequency" will default to once every 24 hours. ClickHouse Cloud will start the backup within an hour of the specified start time.
:::note
The custom schedule will override the default backup policy in ClickHouse Cloud for your given service.

In some rare scenarios, the backup scheduler will not respect the Start Time specified for backups. Specifically, this happens if there was a successful backup triggered < 24 hours from the time of the currently scheduled backup. This could happen due to a retry mechanism we have in place for backups. In such instances, the scheduler will skip over the backup for the current day and retry the backup the next day at the scheduled time.
:::
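The edge case described in the note can be sketched as a simple check (the helper is hypothetical; the real scheduler is internal to ClickHouse Cloud):

```python
from datetime import datetime, timedelta

def should_run_backup(last_success: datetime, scheduled: datetime) -> bool:
    # Skip the scheduled backup when a successful backup already ran less
    # than 24 hours before it (e.g. a retried backup that completed late).
    return scheduled - last_success >= timedelta(hours=24)

scheduled = datetime(2024, 5, 2, 3, 0)
print(should_run_backup(datetime(2024, 5, 1, 3, 0), scheduled))   # True: 24h ago
print(should_run_backup(datetime(2024, 5, 1, 14, 0), scheduled))  # False: skipped, retried next day
```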
See "Configure backup schedules" for steps to configure your backups.
Bring Your Own Bucket (BYOB) Backups {#byob}
ClickHouse Cloud allows exporting backups to your own cloud service provider (CSP) account storage (AWS S3, Google Cloud Storage, or Azure Blob Storage).
If you configure backups to your own bucket, ClickHouse Cloud will still take daily backups to its own bucket.
This is to ensure that we have at least one copy of the data to restore from in case the backups in your bucket get corrupted.
For details of how ClickHouse Cloud backups work, see the backups docs.
In this guide, we walk through how you can export backups to your AWS, GCP, Azure object storage, as well as how to restore these backups in your account to a new ClickHouse Cloud service.
We also share backup / restore commands that allow you to export backups to your bucket and restore them.
:::note Cross-region backups
Users should be aware that any usage where backups are being exported to a different region in the same cloud provider will incur data transfer charges.

Currently, we do not support cross-cloud backups, nor backup / restore for services utilizing Transparent Data Encryption (TDE) or for regulated services.
:::
See "Export backups to your own Cloud account" for examples of how to take full and incremental backups to AWS, GCP, and Azure object storage, as well as how to restore from the backups.
Backup options {#backup-options}
To export backups to your own cloud account, you have two options:
Via Cloud Console UI {#via-ui}
External backups can be configured in the UI. By default, backups will then be taken daily (as specified in the default backup policy). However, we also support configurable backups to your own cloud account, which allows for setting a custom schedule.

It is important to note that all backups to your bucket are full backups with no relationship to other previous or future backups.
Using SQL commands {#using-commands}
You can use SQL commands to export backups to your bucket.
:::warning
ClickHouse Cloud will not manage the lifecycle of backups in customer buckets. Customers are responsible for ensuring that backups in their bucket are managed appropriately to adhere to compliance standards as well as to manage cost. If the backups are corrupted, they cannot be restored.
:::
---
sidebar_label: 'Integrations'
slug: /manage/integrations
title: 'Integrations'
description: 'Integrations for ClickHouse'
doc_type: 'landing-page'
keywords: ['integrations', 'cloud features', 'third-party tools', 'data sources', 'connectors']
---
import Kafkasvg from '@site/static/images/integrations/logos/kafka.svg';
import Confluentsvg from '@site/static/images/integrations/logos/confluent.svg';
import Msksvg from '@site/static/images/integrations/logos/msk.svg';
import Azureeventhubssvg from '@site/static/images/integrations/logos/azure_event_hubs.svg';
import Warpstreamsvg from '@site/static/images/integrations/logos/warpstream.svg';
import S3svg from '@site/static/images/integrations/logos/amazon_s3_logo.svg';
import AmazonKinesis from '@site/static/images/integrations/logos/amazon_kinesis_logo.svg';
import Gcssvg from '@site/static/images/integrations/logos/gcs.svg';
import DOsvg from '@site/static/images/integrations/logos/digitalocean.svg';
import ABSsvg from '@site/static/images/integrations/logos/azureblobstorage.svg';
import Postgressvg from '@site/static/images/integrations/logos/postgresql.svg';
import Mysqlsvg from '@site/static/images/integrations/logos/mysql.svg';
import Mongodbsvg from '@site/static/images/integrations/logos/mongodb.svg';
import redpanda_logo from '@site/static/images/integrations/logos/logo_redpanda.png';
import clickpipes_stack from '@site/static/images/integrations/data-ingestion/clickpipes/clickpipes_stack.png';
import cp_custom_role from '@site/static/images/integrations/data-ingestion/clickpipes/cp_custom_role.png';
import Image from '@theme/IdealImage';
ClickHouse Cloud allows you to connect the tools and services that you love.
Managed integration pipelines for ClickHouse Cloud {#clickpipes}
ClickPipes is a managed integration platform that makes ingesting data from a diverse set of sources as simple as clicking a few buttons.
Designed for the most demanding workloads, ClickPipes's robust and scalable architecture ensures consistent performance and reliability.
ClickPipes can be used for long-term streaming needs or one-time data loading jobs.
| Name | Type | Status | Description |
|------|------|--------|-------------|
| Apache Kafka | Streaming | Stable | Configure ClickPipes and start ingesting streaming data from Apache Kafka into ClickHouse Cloud. |
| Confluent Cloud | Streaming | Stable | Unlock the combined power of Confluent and ClickHouse Cloud through our direct integration. |
| Redpanda | Streaming | Stable | Configure ClickPipes and start ingesting streaming data from Redpanda into ClickHouse Cloud. |
| AWS MSK | Streaming | Stable | Configure ClickPipes and start ingesting streaming data from AWS MSK into ClickHouse Cloud. |
| Azure Event Hubs | Streaming | Stable | Configure ClickPipes and start ingesting streaming data from Azure Event Hubs into ClickHouse Cloud. |
| WarpStream | Streaming | Stable | Configure ClickPipes and start ingesting streaming data from WarpStream into ClickHouse Cloud. |
| Amazon S3 | Object Storage | Stable | Configure ClickPipes to ingest large volumes of data from object storage. |
| Google Cloud Storage | Object Storage | Stable | Configure ClickPipes to ingest large volumes of data from object storage. |
| DigitalOcean Spaces | Object Storage | Stable | Configure ClickPipes to ingest large volumes of data from object storage. |
| Azure Blob Storage | Object Storage | Private Beta | Configure ClickPipes to ingest large volumes of data from object storage. |
| Amazon Kinesis | Streaming | Stable | Configure ClickPipes and start ingesting streaming data from Amazon Kinesis into ClickHouse Cloud. |
| Postgres | DBMS | Stable | Configure ClickPipes and start ingesting data from Postgres into ClickHouse Cloud. |
| MySQL | DBMS | Private Beta | Configure ClickPipes and start ingesting data from MySQL into ClickHouse Cloud. |
-0.0023011094890534878,
0.08496814221143723,
-0.015909554436802864,
0.003183495718985796,
-0.03347279876470566,
0.0332457609474659,
0.024602554738521576,
-0.0024384886492043734,
0.014942298643290997,
-0.04490828514099121,
0.019827865064144135,
-0.013515936210751534,
0.003529396140947938,
-... |
dde83737-8640-40b4-a2cf-038ce7922480 | |
| MySQL | |DBMS| Private Beta | Configure ClickPipes and start ingesting data from MySQL into ClickHouse Cloud. |
| MongoDB | |DBMS| Private Preview | Configure ClickPipes and start ingesting data from MongoDB into ClickHouse Cloud. | | {"source_file": "02_integrations.md"} | [
0.05698079243302345,
-0.05904499813914299,
-0.0960875004529953,
0.039111893624067307,
-0.010418902151286602,
-0.04233931377530098,
0.005208349321037531,
0.019787423312664032,
-0.03335815668106079,
0.055243365466594696,
0.0075417254120111465,
-0.02919427491724491,
0.04172549024224281,
-0.00... |
5b85e3e6-a1bc-41f6-9740-40d3ed15dbc6 | Language client integrations {#language-client-integrations}
ClickHouse offers a number of language client integrations, for which the documentation for each is linked below.
| Page | Description |
|-------------------------------------------------------------------------|----------------------------------------------------------------------------------|
| C++ | C++ Client Library and userver Asynchronous Framework |
| C# | Learn how to connect your C# projects to ClickHouse. |
| Go | Learn how to connect your Go projects to ClickHouse. |
| JavaScript | Learn how to connect your JS projects to ClickHouse with the official JS client. |
| Java | Learn more about several integrations for Java and ClickHouse. |
| Python | Learn how to connect your Python projects to ClickHouse. |
| Rust | Learn how to connect your Rust projects to ClickHouse. |
| Third-party clients | Learn more about client libraries from third party developers. |
In addition to ClickPipes and language clients, ClickHouse supports a host of other integrations, spanning core, partner, and community integrations.
For a complete list, see the "Integrations" section of the docs. | {"source_file": "02_integrations.md"} | [
-0.0609075166285038,
-0.03688213974237442,
-0.06526760011911392,
-0.002999732503667474,
-0.07155057787895203,
0.07821311801671982,
0.044607214629650116,
0.0563092902302742,
-0.02266472950577736,
-0.07425444573163986,
-0.004084797576069832,
-0.013491148129105568,
0.01818506047129631,
0.0253... |
57293580-b9e0-4470-b54d-f6469ae179b4 | sidebar_label: 'ClickHouse Cloud tiers'
slug: /cloud/manage/cloud-tiers
title: 'ClickHouse Cloud Tiers'
description: 'Cloud tiers available in ClickHouse Cloud'
keywords: ['cloud tiers', 'service plans', 'cloud pricing tiers', 'cloud service levels']
doc_type: 'reference'
ClickHouse Cloud tiers
There are several tiers available in ClickHouse Cloud.
Tiers are assigned at the organization level. Services within an organization therefore belong to the same tier.
This page discusses which tiers are right for your specific use case.
Summary of cloud tiers:
| | [Basic](#basic) | [Scale (Recommended)](#scale) | [Enterprise](#enterprise) |
|---|---|---|---|
| **Service Features** | | | |
| Number of services | ✓ Unlimited | ✓ Unlimited | ✓ Unlimited |
| Storage | ✓ Maximum of 1 TB / Service | ✓ Unlimited | ✓ Unlimited |
| Memory | ✓ 8-12 GiB total memory | ✓ Configurable | ✓ Configurable |
| Availability | ✓ 1 zone | ✓ 2+ zones | ✓ 2+ zones |
| Backups | ✓ 1 backup every 24h, retained for 1 day | ✓ Configurable | ✓ Configurable |
| Vertical scaling | | ✓ Automatic Scaling | ✓ Automatic for standard profiles, manual for custom profiles |
| Horizontal scaling | | ✓ Manual Scaling | ✓ Manual Scaling |
| ClickPipes | ✓ | ✓ | ✓ |
| Early upgrades | | ✓ | ✓ |
| Compute-compute separation | | ✓ | ✓ |
| Export backups to your own cloud account | | | ✓ |
| Scheduled upgrades | | | ✓ |
| Custom hardware profiles | | | ✓ |
| **Security** | | | |
| SAML/SSO | | | ✓ |
| MFA | ✓ | ✓ | ✓ |
| SOC 2 Type II | ✓ | ✓ | ✓ |
| ISO 27001 | ✓ | ✓ | ✓ |
| Private Networking | | ✓ | ✓ |
| S3 role based access | | ✓ | ✓ |
| Transparent data encryption (CMEK for TDE) | | | ✓ |
| HIPAA | | | ✓ |
Basic {#basic}
Cost-effective option that supports single-replica deployments.
Ideal for departmental use cases with smaller data volumes that do not have hard reliability guarantees.
:::note
Services in the Basic tier are fixed in size and allow neither automatic nor manual scaling.
Upgrade to the Scale or Enterprise tier to scale your services.
:::
Scale {#scale}
Designed for workloads requiring enhanced SLAs (2+ replica deployments), scalability, and advanced security.
Offers support for features such as:
- Private networking support.
- Compute-compute separation.
- Flexible scaling options (scale up/down, in/out).
- Configurable backups.
Enterprise {#enterprise}
Caters to large-scale, mission critical deployments that have stringent security and compliance needs.
Everything in Scale, plus:
Flexible scaling: standard profiles (1:4 vCPU:memory ratio), as well as HighMemory (1:8 ratio) and HighCPU (1:2 ratio) custom profiles.
Provides the highest levels of performance and reliability guarantees.
Supports enterprise-grade security:
Single Sign On (SSO) | {"source_file": "01_cloud_tiers.md"} | [
-0.050696730613708496,
-0.03017149679362774,
-0.03848829120397568,
-0.0035850126296281815,
-0.05168791115283966,
-0.021207712590694427,
-0.01208198256790638,
0.02408014051616192,
-0.03530556336045265,
0.0027032180223613977,
0.01517451461404562,
-0.022050537168979645,
0.04040620103478432,
-... |
323c1917-2bc8-44a4-82eb-607ed7a76670 | Provides the highest levels of performance and reliability guarantees.
Supports enterprise-grade security:
Single Sign On (SSO)
Enhanced Encryption: For AWS and GCP services. Services are encrypted with our key by default, which can be rotated to your own key to enable Customer Managed Encryption Keys (CMEK).
Allows scheduled upgrades: you can select the day of the week / time window for upgrades, for both database and cloud releases.
Offers HIPAA and PCI compliance.
Exports backups to the user's own account.
:::note
Single replica services across all three tiers are meant to be fixed in size (8 GiB, 12 GiB)
:::
Upgrading to a different tier {#upgrading-to-a-different-tier}
You can always upgrade from Basic to Scale or from Scale to Enterprise. Downgrading tiers will require disabling premium features.
If you have any questions about service types, please see the
pricing page
or contact support@clickhouse.com. | {"source_file": "01_cloud_tiers.md"} | [
-0.07799626886844635,
0.006253328640013933,
0.03527074307203293,
-0.06785743683576584,
-0.002677003853023052,
0.03465939313173294,
-0.025547677651047707,
0.007142453920096159,
-0.008240122348070145,
0.04690413922071457,
-0.04556591808795929,
0.025990640744566917,
0.03924710676074028,
-0.07... |
e8935dc1-69f4-452d-b22e-7a7d11ee6571 | slug: /cloud/get-started
title: 'Get started with ClickHouse Cloud'
description: 'Complete guide to getting started with ClickHouse Cloud - from discovering features to deployment and optimization'
hide_title: true
doc_type: 'guide'
keywords: ['onboarding', 'getting started', 'cloud setup', 'quickstart', 'introduction']
Get started with ClickHouse Cloud
New to ClickHouse Cloud and not sure where to begin? In this section of the docs,
we'll walk you through everything you need to get up and running quickly. We've
arranged this getting started section into three subsections to help guide
you through each step of the process as you explore ClickHouse Cloud.
Discover ClickHouse Cloud {#discover-clickhouse-cloud}
Learn about what ClickHouse Cloud is, and how it differs from the open-source version
Discover the main use-cases of ClickHouse Cloud
Get set up with ClickHouse Cloud {#get-set-up-with-clickhouse-cloud}
Now that you know what ClickHouse Cloud is, we'll walk you through the process
of getting your data into ClickHouse Cloud, show you the main features available
and point you towards some general best practices you should know.
Topics include:
Migration guides from various platforms
Tune your ClickHouse Cloud deployment {#evaluate-clickhouse-cloud}
Now that your data is in ClickHouse Cloud, we'll walk you through some more advanced
topics to help you get the most out of your ClickHouse Cloud experience and explore
what the platform has to offer.
Topics include:
Query performance and optimization
Monitoring
Security considerations
Troubleshooting tips | {"source_file": "index.md"} | [
0.006140126846730709,
-0.047059375792741776,
0.022031472995877266,
0.022659849375486374,
0.05907341092824936,
-0.02207138016819954,
0.02997196465730667,
-0.03685886785387993,
-0.05091634765267372,
0.028431838378310204,
0.057055193930864334,
-0.013476685620844364,
0.03349713236093521,
-0.05... |
ff541b0c-a35f-46ea-9ff7-c1d532c1eb9a | slug: /cloud/guides/production-readiness
sidebar_label: 'Production readiness'
title: 'ClickHouse Cloud production readiness guide'
description: 'Guide for organizations transitioning from quick start to enterprise-ready ClickHouse Cloud deployments'
keywords: ['production readiness', 'enterprise', 'saml', 'sso', 'terraform', 'monitoring', 'backup', 'disaster recovery']
doc_type: 'guide'
ClickHouse Cloud Production Readiness Guide {#production-readiness}
For organizations who have completed the quick start guide and have an active service with data flowing
:::note[TL;DR]
This guide helps you transition from quick start to enterprise-ready ClickHouse Cloud deployments. You'll learn how to:
Establish separate dev/staging/production environments for safe testing
Integrate SAML/SSO authentication with your identity provider
Automate deployments with Terraform or the Cloud API
Connect monitoring to your alerting infrastructure (Prometheus, PagerDuty)
Validate backup procedures and document disaster recovery processes
:::
Introduction {#introduction}
You have ClickHouse Cloud running successfully for your business workloads. Now you need to mature your deployment to meet enterprise production standards, whether triggered by a compliance audit, a production incident from an untested query, or IT requirements to integrate with corporate systems.
ClickHouse Cloud's managed platform handles infrastructure operations, automatic scaling, and system maintenance. Enterprise production readiness requires connecting ClickHouse Cloud to your broader IT environment through authentication systems, monitoring infrastructure, automation tools, and business continuity processes.
Your responsibilities for enterprise production readiness:
- Establish separate environments for safe testing before production deployment
- Integrate with existing identity providers and access management systems
- Connect monitoring and alerting to your operational infrastructure
- Implement infrastructure-as-code practices for consistent management
- Establish backup validation and disaster recovery procedures
- Configure cost management and billing integration
This guide walks you through each area, helping you transition from a working ClickHouse Cloud deployment to an enterprise-ready system.
Environment strategy {#environment-strategy}
Establish separate environments to safely test changes before impacting production workloads. Most production incidents trace back to untested queries or configuration changes deployed directly to production systems.
:::note
In ClickHouse Cloud, each environment is a separate service.
You'll provision distinct production, staging, and development services within your organization, each with its own compute resources, storage, and endpoint.
:::
Environment structure
: Maintain production (live workloads), staging (production-equivalent validation), and development (individual/team experimentation) environments. | {"source_file": "production-readiness.md"} | [
0.014338599517941475,
-0.06862727552652359,
0.02147013694047928,
-0.0041749319061636925,
-0.0005684663192369044,
-0.022684257477521896,
-0.02078220434486866,
-0.03478812053799629,
-0.03418080881237984,
0.04094380885362625,
0.047974102199077606,
-0.022524595260620117,
0.00552370073273778,
-... |
e5f86b67-db80-483c-bc85-3f9f2ace8f24 | Environment structure
: Maintain production (live workloads), staging (production-equivalent validation), and development (individual/team experimentation) environments.
Testing
: Test queries in staging before production deployment. Queries that work on small datasets often cause memory exhaustion, excessive CPU usage, or slow execution at production scale. Validate configuration changes including user permissions, quotas, and service settings in staging; configuration errors discovered in production create immediate operational incidents.
Sizing
: Size your staging service to approximate production load characteristics. Testing on significantly smaller infrastructure may not reveal resource contention or scaling issues. Use production-representative datasets through periodic data refreshes or synthetic data generation. For guidance on how to size your staging environment and scale services appropriately, refer to the
Sizing and hardware recommendations
and
Scaling in ClickHouse Cloud
documentation. These resources provide practical advice on memory, CPU, and storage sizing, as well as details on vertical and horizontal scaling options to help you match your staging environment to production workloads.
Private networking {#private-networking}
Private networking in ClickHouse Cloud allows you to connect your ClickHouse services directly to your cloud virtual network, ensuring that data does not traverse the public internet. This is essential for organizations with strict security or compliance requirements, or for those running applications in private subnets.
ClickHouse Cloud supports private networking through the following mechanisms:
AWS PrivateLink
: Enables secure connectivity between your VPC and ClickHouse Cloud without exposing traffic to the public internet. It supports cross-region connectivity and is available in the Scale and Enterprise plans. Setup involves creating a PrivateLink endpoint and adding it to your ClickHouse Cloud organization and service allow list. More details and step-by-step instructions are available in the documentation here.
GCP Private Service Connect
(PSC): Allows private access to ClickHouse Cloud from your Google Cloud VPC. Like AWS, it is available in Scale and Enterprise plans and requires explicit configuration of service endpoints and allow lists here.
Azure Private Link
: Provides private connectivity between your Azure VNet and ClickHouse Cloud, supporting cross-region connections. The setup process involves obtaining a connection alias, creating a private endpoint, and updating allow lists here.
If you need more technical details or step-by-step setup instructions, the linked documentation for each provider contains comprehensive guides.
Enterprise authentication and user management {#enterprise-authentication}
Moving from console-based user management to enterprise authentication integration is essential for production readiness. | {"source_file": "production-readiness.md"} | [
0.06659568846225739,
0.00946387555450201,
0.009533568285405636,
0.02520640380680561,
-0.006307905539870262,
-0.05634324252605438,
-0.07529255002737045,
-0.005855258088558912,
-0.05010051652789116,
0.0561569407582283,
-0.008190702646970749,
-0.042597446590662,
0.04787744581699371,
-0.021791... |
76764162-0a91-43c6-a12c-ff79da6ccf20 | Moving from console-based user management to enterprise authentication integration is essential for production readiness.
SSO and social authentication {#sso-authentication}
SAML SSO
: Enterprise tier ClickHouse Cloud supports SAML integration with identity providers including Okta, Azure Active Directory, and Google Workspace. SAML configuration requires coordination with ClickHouse support and involves providing your IdP metadata and configuring attribute mappings.
Social SSO
: ClickHouse Cloud also supports social authentication providers (Google, Microsoft, GitHub) as an equally secure alternative to SAML SSO. Social SSO provides faster setup for organizations without existing SAML infrastructure while maintaining enterprise security standards.
:::note Important limitation
Users authenticated through SAML or social SSO are assigned the "Member" role by default and must be manually granted additional roles by an admin after their first login. Group-to-role mapping and automatic role assignment are not currently supported.
:::
Access control design {#access-control-design}
ClickHouse Cloud uses organization-level roles (Admin, Developer, Billing, Member) and service/database-level roles (Service Admin, Read Only, SQL console roles). Design roles around job functions applying the principle of least privilege:
Application users
: Service accounts with specific database and table access
Analyst users
: Read-only access to curated datasets and reporting views
Admin users
: Full administrative capabilities
Configure quotas, limits, and settings profiles to manage resource usage for different users and roles. Set memory and execution time limits to prevent individual queries from impacting system performance. Monitor resource usage through audit, session, and query logs to identify users or applications that frequently hit limits. Conduct regular access reviews using ClickHouse Cloud's audit capabilities.
User lifecycle management limitations {#user-lifecycle-management}
ClickHouse Cloud does not currently support SCIM or automated provisioning/deprovisioning via identity providers. Users must be manually removed from the ClickHouse Cloud console after being removed from your IdP. Plan for manual user management processes until these features become available.
Learn more about Cloud Access Management and SAML SSO setup.
Infrastructure as code and automation {#infrastructure-as-code}
Managing ClickHouse Cloud through infrastructure-as-code practices and API automation provides consistency, version control, and repeatability for your deployment configuration.
Terraform Provider {#terraform-provider}
Configure the ClickHouse Terraform provider with API keys created in the ClickHouse Cloud console:
```terraform
terraform {
required_providers {
clickhouse = {
source = "ClickHouse/clickhouse"
version = "~> 2.0"
}
}
} | {"source_file": "production-readiness.md"} | [
-0.038399435579776764,
-0.03915783017873764,
-0.03433489054441452,
0.002124691614881158,
-0.03922635689377785,
0.05042366683483124,
0.06435853242874146,
-0.07273701578378677,
0.03991544619202614,
0.028740284964442253,
0.010093816556036472,
0.04652152583003044,
0.02864413894712925,
0.044742... |
d0c1eb35-22e5-4a09-a28d-458ac99bcc5b | ```terraform
terraform {
required_providers {
clickhouse = {
source = "ClickHouse/clickhouse"
version = "~> 2.0"
}
}
}
provider "clickhouse" {
environment = "production"
organization_id = var.organization_id
token_key = var.token_key
token_secret = var.token_secret
}
```
The Terraform provider supports service provisioning, IP access lists, and user management. Note that the provider does not currently support importing existing services or explicit backup configuration. For features not covered by the provider, manage them through the console or contact ClickHouse support.
For comprehensive examples including service configuration and network access controls, see the Terraform example on how to use the Cloud API.
Cloud API integration {#cloud-api-integration}
Organizations with existing automation frameworks can integrate ClickHouse Cloud management directly through the Cloud API. The API provides programmatic access to service lifecycle management, user administration, backup operations, and monitoring data retrieval.
Common API integration patterns:
- Custom provisioning workflows integrated with internal ticketing systems
- Automated scaling adjustments based on application deployment schedules
- Programmatic backup validation and reporting for compliance workflows
- Integration with existing infrastructure management platforms
API authentication uses the same token-based approach as Terraform. For complete API reference and integration examples, see the ClickHouse Cloud API documentation.
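As an illustration, the token-based authentication pattern can be exercised from any HTTP client. The sketch below builds a Basic-auth request for a service-listing endpoint; the endpoint path and the `result` response field are assumptions modeled on the public Cloud API, and the organization ID and key values are placeholders.

```python
import base64
import json
import urllib.request

API_BASE = "https://api.clickhouse.cloud/v1"  # Cloud API base URL

def build_request(org_id: str, key_id: str, key_secret: str) -> urllib.request.Request:
    """Build an authenticated request for listing an organization's services.

    Cloud API keys use HTTP Basic auth: the key ID is the username and
    the key secret is the password.
    """
    url = f"{API_BASE}/organizations/{org_id}/services"
    token = base64.b64encode(f"{key_id}:{key_secret}".encode()).decode()
    return urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})

def list_services(org_id: str, key_id: str, key_secret: str) -> list:
    """Fetch and return the organization's services (assumed response shape)."""
    req = build_request(org_id, key_id, key_secret)
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Assumes the payload is wrapped in a "result" field.
    return body.get("result", [])

if __name__ == "__main__":
    # Placeholder credentials -- substitute your own organization ID and API key.
    for svc in list_services("<org_id>", "<KEY_ID>", "<KEY_SECRET>"):
        print(svc.get("name"), svc.get("state"))
```

The same request-building helper works for any other endpoint under the organization path, so automation scripts only need one place to manage credentials.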
Monitoring and operational integration {#monitoring-integration}
Connecting ClickHouse Cloud to your existing monitoring infrastructure ensures visibility and proactive issue detection.
Built-in monitoring {#built-in-monitoring}
ClickHouse Cloud provides an advanced dashboard with real-time metrics including queries per second, memory usage, CPU usage, and storage rates. Access it via the Cloud console under Monitoring → Advanced dashboard. Create custom dashboards tailored to specific workload patterns or team resource consumption.
:::note Common production gaps
Lack of proactive alerting integration with enterprise incident management systems and automated cost monitoring. Built-in dashboards provide visibility but automated alerting requires external integration.
:::
Production alerting setup {#production-alerting}
Built-in Capabilities
: ClickHouse Cloud provides notifications for billing events, scaling events, and service health via email, UI, and Slack. Configure delivery channels and notification severities through the console notification settings.
Enterprise Integration
: For advanced alerting (PagerDuty, custom webhooks), use the Prometheus endpoint to export metrics to your existing monitoring infrastructure: | {"source_file": "production-readiness.md"} | [
-0.0857505351305008,
-0.05232205614447594,
0.02370416559278965,
0.004404740873724222,
-0.020192550495266914,
-0.022815585136413574,
-0.004026081878691912,
-0.035406824201345444,
-0.05008550360798836,
0.07954800128936768,
-0.04591618850827217,
-0.02483774907886982,
0.04891762509942055,
-0.0... |
5ef85eab-3ec7-4c3b-9353-56cbf73def2a | Enterprise Integration
: For advanced alerting (PagerDuty, custom webhooks), use the Prometheus endpoint to export metrics to your existing monitoring infrastructure:
```yaml
scrape_configs:
  - job_name: "clickhouse"
    static_configs:
      - targets: ["https://api.clickhouse.cloud/v1/organizations/<org_id>/prometheus"]
    basic_auth:
      username: <KEY_ID>
      password: <KEY_SECRET>
```
For comprehensive setup including detailed Prometheus/Grafana configuration and advanced alerting, see the ClickHouse Cloud Observability Guide.
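Once metrics are flowing from the Prometheus endpoint, a little glue code can feed them into custom checks or webhooks. The sketch below is a deliberately minimal parser for Prometheus exposition-format text; the sample metric names are illustrative only, and a production setup would use a proper Prometheus client or server rather than hand parsing.

```python
def parse_prometheus(text: str) -> dict:
    """Parse Prometheus exposition text into {series: value}.

    Minimal parser for illustration: skips comments (# HELP / # TYPE)
    and blank lines, keeps label sets as part of the key, and assumes
    label values contain no spaces.
    """
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # A sample line looks like: metric_name{labels} value
        series, _, value = line.rpartition(" ")
        try:
            metrics[series] = float(value)
        except ValueError:
            continue  # ignore malformed lines
    return metrics

# Illustrative exposition text, standing in for the endpoint's response.
SAMPLE = """\
# HELP ClickHouseProfileEvents_Query Number of queries started.
# TYPE ClickHouseProfileEvents_Query counter
ClickHouseProfileEvents_Query 42
ClickHouseMetrics_MemoryTracking 1048576
"""

if __name__ == "__main__":
    for series, value in parse_prometheus(SAMPLE).items():
        print(series, value)
```

A parsed dictionary like this is enough to drive a simple threshold alert (for example, posting to a webhook when memory tracking exceeds a limit).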
Business continuity and support integration {#business-continuity}
Establishing backup validation procedures and support integration ensures your ClickHouse Cloud deployment can recover from incidents and access help when needed.
Backup strategy assessment {#backup-strategy}
ClickHouse Cloud provides automatic backups with configurable retention periods. Assess your current backup configuration against compliance and recovery requirements. Enterprise customers with specific compliance requirements around backup location or encryption can configure ClickHouse Cloud to store backups in their own cloud storage buckets (BYOB). Contact ClickHouse support for BYOB configuration.
Validate and test recovery procedures {#validate-test-recovery}
Most organizations discover backup gaps during actual recovery scenarios. Establish regular validation cycles to verify backup integrity and test recovery procedures before incidents occur. Schedule periodic test restorations to non-production environments, document step-by-step recovery procedures including time estimates, verify restored data completeness and application functionality, and test recovery procedures with different failure scenarios (service deletion, data corruption, regional outages). Maintain updated recovery runbooks accessible to on-call teams.
Test backup restoration at least quarterly for critical production services. Organizations with strict compliance requirements may need monthly or even weekly validation cycles.
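Freshness checks on backup metadata can also be automated rather than reviewed by hand. The sketch below encodes a simple check one might schedule against backup records fetched from the Cloud API; the `status`/`finishedAt` field names and the `"done"` status value are illustrative assumptions, not the documented response schema.

```python
from datetime import datetime, timedelta, timezone

def latest_backup_ok(backups: list, max_age_hours: int = 24) -> bool:
    """Return True if the most recent finished backup is fresh enough.

    `backups` is a list of dicts with a 'status' field and an ISO 8601
    'finishedAt' timestamp, mirroring an assumed API response shape.
    """
    finished = [b for b in backups if b.get("status") == "done"]
    if not finished:
        return False
    newest = max(datetime.fromisoformat(b["finishedAt"]) for b in finished)
    return datetime.now(timezone.utc) - newest < timedelta(hours=max_age_hours)

if __name__ == "__main__":
    # Synthetic records standing in for an API response.
    sample = [{"status": "done",
               "finishedAt": (datetime.now(timezone.utc) - timedelta(hours=3)).isoformat()}]
    print("backup fresh within 24h:", latest_backup_ok(sample))
```

Wiring a check like this into an alerting channel turns backup validation from a periodic manual task into a continuously enforced invariant.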
Disaster recovery planning {#disaster-recovery-planning}
Document your recovery time objectives (RTO) and recovery point objectives (RPO) to validate that your current backup configuration meets business requirements. Establish regular testing schedules for backup restoration and maintain updated recovery documentation.
Cross-region backup storage
: Organizations with geographic disaster recovery requirements can configure ClickHouse Cloud to export backups to customer-owned storage buckets in alternate regions. This provides protection against regional outages but requires manual restoration procedures. Contact ClickHouse support to implement cross-region backup exports. Future platform releases will provide automated multi-region replication capabilities.
Production support integration {#production-support} | {"source_file": "production-readiness.md"} | [
-0.07733186334371567,
-0.005060562863945961,
-0.04906568303704262,
-0.0035604527220129967,
0.021315287798643112,
-0.042656801640987396,
0.007188142742961645,
-0.04954065382480621,
-0.003573652124032378,
0.009539137594401836,
-0.000480481656268239,
-0.027676468715071678,
0.043137550354003906,... |
00b16cab-5f6e-4404-a7b7-5f07729280ae | Production support integration {#production-support}
Understand your current support tier's SLA expectations and escalation procedures. Create internal runbooks defining when to engage ClickHouse support and integrate these procedures with your existing incident management processes.
Learn more about ClickHouse Cloud backup and recovery and support services.
Next steps {#next-steps}
After implementing the integrations and procedures in this guide, visit the Cloud resource tour for guides on monitoring, security, and cost optimization.
When current service tier limitations impact your production operations, consider upgrade paths for enhanced capabilities such as private networking, TDE/CMEK (Transparent Data Encryption with Customer-Managed Encryption Keys), or advanced backup options. | {"source_file": "production-readiness.md"} | [
-0.07315739244222641,
-0.026765797287225723,
0.016891324892640114,
-0.0518624372780323,
0.0008695268770679832,
0.0038296265993267298,
-0.025534721091389656,
0.018944328650832176,
-0.057388730347156525,
0.024068918079137802,
0.03304233402013779,
0.019722945988178253,
0.0404517762362957,
-0.... |
9c92f77f-89a7-4588-911a-59aa9e157d17 | slug: /cloud/guides
title: 'Guides'
hide_title: true
description: 'Table of contents page for the ClickHouse Cloud guides section'
doc_type: 'landing-page'
keywords: ['cloud guides', 'documentation', 'how-to', 'cloud features', 'tutorials'] | {"source_file": "index.md"} | [
0.014579405076801777,
-0.012441005557775497,
0.0316622257232666,
0.0034299257677048445,
0.06537975370883942,
-0.024051444604992867,
-0.005450394935905933,
-0.03096034936606884,
0.0008595176041126251,
0.030962703749537468,
0.03591134026646614,
0.034783754497766495,
0.04350407421588898,
-0.0... |
57b1f959-e8d0-4039-8973-6ab058ab89ef | | Page | Description |
|-----|-----|
| Overview | Provides an overview of backups in ClickHouse Cloud |
| Take a backup or restore a backup from the UI | Page describing how to take a backup or restore a backup from the UI with your own bucket |
| Take a backup or restore a backup using commands | Page describing how to take a backup or restore a backup with your own bucket using commands |
| Accessing S3 data securely | This article demonstrates how ClickHouse Cloud customers can leverage role-based access to authenticate with Amazon Simple Storage Service (S3) and access their data securely. |
| Architecture | Deploy ClickHouse on your own cloud infrastructure |
| AWS PrivateLink | This document describes how to connect to ClickHouse Cloud using AWS PrivateLink. |
| Azure Private Link | How to set up Azure Private Link |
| BYOC on AWS FAQ | Deploy ClickHouse on your own cloud infrastructure |
| BYOC on AWS Observability | Deploy ClickHouse on your own cloud infrastructure |
| BYOC Onboarding for AWS | Deploy ClickHouse on your own cloud infrastructure |
| BYOC security playbook | This page illustrates methods customers can use to identify potential security events |
| ClickHouse Cloud production readiness guide | Guide for organizations transitioning from quick start to enterprise-ready ClickHouse Cloud deployments |
| ClickHouse Government | Overview of ClickHouse Government offering |
| ClickHouse Private | Overview of ClickHouse Private offering |
| Cloud Compatibility | This guide provides an overview of what to expect functionally and operationally in ClickHouse Cloud. |
| Cloud IP addresses | This page documents the Cloud Endpoints API security features within ClickHouse. It details how to secure your ClickHouse deployments by managing access through authentication and authorization mechanisms. |
| Common access management queries | This article shows the basics of defining SQL users and roles and applying those privileges and permissions to databases, tables, rows, and columns. |
| Configure backup schedules | Guide showing how to configure backups |
| Console audit log | This page describes how users can review the cloud audit log |
| Data encryption | Learn more about data encryption in ClickHouse Cloud |
| Data masking in ClickHouse | A guide to data masking in ClickHouse |
| Database audit log | This page describes how users can review the database audit log |
| Export Backups to your Own Cloud Account | Describes how to export backups to your own Cloud account |
| Gather your connection details | Gather your connection details |
| GCP private service connect | This document describes how to connect to ClickHouse Cloud using Google Cloud Platform (GCP) Private Service Connect (PSC), and how to disable access to your ClickHouse Cloud services from addresses other than GCP PSC addresses using ClickHouse Cloud IP access lists. |
| HIPAA onboarding | {"source_file": "index.md"} | [
-0.057051364332437515,
-0.056781549006700516,
-0.05822133645415306,
0.0008354062447324395,
0.04678843542933464,
0.06791266798973083,
0.03272370249032974,
-0.015226737596094608,
-0.019732890650629997,
0.044967446476221085,
0.02807503007352352,
0.02794785611331463,
0.10887893289327621,
-0.09... |
bf14bf8b-b6c6-438c-ba7a-c47a28b9378f | |
| HIPAA onboarding | Learn more about how to onboard to HIPAA compliant services |
| Manage cloud users | This page describes how administrators can add users, manage assignments, and remove users |
| Manage database users | This page describes how administrators can add database users, manage assignments, and remove database users |
| Manage my account | This page describes how users can accept invitations, manage MFA settings, and reset passwords |
| Manage SQL console role assignments | Guide showing how to manage SQL console role assignments |
| Multi tenancy | Best practices to implement multi tenancy |
| Overview | Deploy ClickHouse on your own cloud infrastructure |
| PCI onboarding | Learn more about how to onboard to PCI compliant services |
| Query API Endpoints | Easily spin up REST API endpoints from your saved queries |
| SAML SSO setup | How to set up SAML SSO with ClickHouse Cloud |
| Setting IP filters | This page explains how to set IP filters in ClickHouse Cloud to control access to ClickHouse services. |
| Usage limits | Describes the recommended usage limits in ClickHouse Cloud | | {"source_file": "index.md"} | [
0.03494348004460335,
-0.05690538138151169,
-0.023549364879727364,
0.00027871172642335296,
-0.03621944040060043,
0.06821494549512863,
0.025030508637428284,
-0.007784719113260508,
-0.07779461145401001,
0.03899456560611725,
-0.012335937470197678,
0.04916604980826378,
0.04697110503911972,
-0.0... |
1e8bba2e-387d-41a6-a5ff-4829758c6553 | slug: /whats-new/cloud-compatibility
sidebar_label: 'Cloud compatibility'
title: 'Cloud Compatibility'
description: 'This guide provides an overview of what to expect functionally and operationally in ClickHouse Cloud.'
keywords: ['ClickHouse Cloud', 'compatibility']
doc_type: 'guide'
ClickHouse Cloud compatibility guide
This guide provides an overview of what to expect functionally and operationally in ClickHouse Cloud. While ClickHouse Cloud is based on the open-source ClickHouse distribution, there may be some differences in architecture and implementation. You may find this blog on
how we built ClickHouse Cloud
interesting and relevant to read as background.
ClickHouse Cloud architecture {#clickhouse-cloud-architecture}
ClickHouse Cloud significantly simplifies operational overhead and reduces the costs of running ClickHouse at scale. There is no need to size your deployment upfront, set up replication for high availability, manually shard your data, scale up your servers when your workload increases, or scale them down when you are not using them; we handle this for you.
These benefits come as a result of architectural choices underlying ClickHouse Cloud:
- Compute and storage are separated and thus can be automatically scaled along separate dimensions, so you do not have to over-provision either storage or compute in static instance configurations.
- Tiered storage on top of object store and multi-level caching provides virtually limitless scaling and good price/performance ratio, so you do not have to size your storage partition upfront and worry about high storage costs.
- High availability is on by default and replication is transparently managed, so you can focus on building your applications or analyzing your data.
- Automatic scaling for variable continuous workloads is on by default, so you don't have to size your service upfront, scale up your servers when your workload increases, or manually scale down your servers when you have less activity.
- Seamless hibernation for intermittent workloads is on by default. We automatically pause your compute resources after a period of inactivity and transparently start them again when a new query arrives, so you don't have to pay for idle resources.
- Advanced scaling controls provide the ability to set an auto-scaling maximum for additional cost control or an auto-scaling minimum to reserve compute resources for applications with specialized performance requirements.
Capabilities {#capabilities}
ClickHouse Cloud provides access to a curated set of capabilities in the open source distribution of ClickHouse. Tables below describe some features that are disabled in ClickHouse Cloud at this time.
DDL syntax {#ddl-syntax} | {"source_file": "cloud-compatibility.md"} | [
-0.02654537931084633,
-0.06415451318025589,
0.038191720843315125,
-0.04683620482683182,
0.006137727294117212,
-0.04515484720468521,
-0.07236562669277191,
-0.03537113219499588,
-0.017971495166420937,
0.03915015235543251,
0.016299784183502197,
0.03828185424208641,
-0.008201008662581444,
-0.0... |
65e2b3f3-7653-4d31-a9aa-c244b230b44d | DDL syntax {#ddl-syntax}
For the most part, the DDL syntax of ClickHouse Cloud should match what is available in self-managed installs. A few notable exceptions:
- Support for
CREATE AS SELECT
, which is currently not available. As a workaround, we suggest using
CREATE ... EMPTY ... AS SELECT
and then inserting into that table (see
this blog
for an example).
- Some experimental syntax may be disabled, for instance,
ALTER TABLE ... MODIFY QUERY
statement.
- Some introspection functionality may be disabled for security purposes, for example, the
addressToLine
SQL function.
- Do not use
ON CLUSTER
parameters in ClickHouse Cloud - these are not needed. While these are mostly no-op functions, they can still cause an error if you are trying to use
macros
. Macros often do not work and are not needed in ClickHouse Cloud.
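The CREATE ... EMPTY ... AS SELECT workaround mentioned above can be sketched as follows; the table and column names are illustrative assumptions, not from the original documentation:

```sql
-- Create the table empty, with its schema inferred from the SELECT.
CREATE TABLE events_copy
ENGINE = MergeTree
ORDER BY event_time
EMPTY AS SELECT * FROM events;

-- Then populate it in a separate step.
INSERT INTO events_copy SELECT * FROM events;
```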
Database and table engines {#database-and-table-engines}
ClickHouse Cloud provides a highly-available, replicated service by default. As a result, all database and table engines are "Replicated". You do not need to specify "Replicated"; for example,
ReplicatedMergeTree
and
MergeTree
are identical when used in ClickHouse Cloud.
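For illustration (the table name and columns are hypothetical), the following two statements create identical tables in ClickHouse Cloud, since MergeTree is transparently converted to ReplicatedMergeTree:

```sql
-- In ClickHouse Cloud this statement:
CREATE TABLE t (id UInt64, value String)
ENGINE = MergeTree
ORDER BY id;

-- behaves the same as this one:
CREATE TABLE t (id UInt64, value String)
ENGINE = ReplicatedMergeTree
ORDER BY id;
```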
Supported table engines
ReplicatedMergeTree (default, when none is specified)
ReplicatedSummingMergeTree
ReplicatedAggregatingMergeTree
ReplicatedReplacingMergeTree
ReplicatedCollapsingMergeTree
ReplicatedVersionedCollapsingMergeTree
MergeTree (converted to ReplicatedMergeTree)
SummingMergeTree (converted to ReplicatedSummingMergeTree)
AggregatingMergeTree (converted to ReplicatedAggregatingMergeTree)
ReplacingMergeTree (converted to ReplicatedReplacingMergeTree)
CollapsingMergeTree (converted to ReplicatedCollapsingMergeTree)
VersionedCollapsingMergeTree (converted to ReplicatedVersionedCollapsingMergeTree)
URL
View
MaterializedView
GenerateRandom
Null
Buffer
Memory
Deltalake
Hudi
MySQL
MongoDB
NATS
RabbitMQ
PostgreSQL
S3
Interfaces {#interfaces}
ClickHouse Cloud supports HTTPS, native interfaces, and the
MySQL wire protocol
. Support for more interfaces such as Postgres is coming soon.
Dictionaries {#dictionaries}
Dictionaries are a popular way to speed up lookups in ClickHouse. ClickHouse Cloud currently supports dictionaries from PostgreSQL, MySQL, remote and local ClickHouse servers, Redis, MongoDB and HTTP sources.
Federated queries {#federated-queries}
We support federated ClickHouse queries for cross-cluster communication in the cloud, and for communication with external self-managed ClickHouse clusters. ClickHouse Cloud currently supports federated queries using the following integration engines:
- Deltalake
- Hudi
- MySQL
- MongoDB
- NATS
- RabbitMQ
- PostgreSQL
- S3
Federated queries with some external database and table engines, such as SQLite, ODBC, JDBC, Redis, HDFS and Hive are not yet supported.
User defined functions {#user-defined-functions} | {"source_file": "cloud-compatibility.md"} | [
-0.009802636690437794,
-0.10799678415060043,
0.021297592669725418,
0.051776185631752014,
-0.04534745216369629,
-0.017282305285334587,
0.0240177009254694,
-0.06385920941829681,
0.005917564034461975,
0.028313670307397842,
0.05498654022812843,
-0.0686112716794014,
0.08099966496229172,
-0.0846... |
6191e50e-7e8d-402c-846f-65f7d87aa1c6 | Federated queries with some external database and table engines, such as SQLite, ODBC, JDBC, Redis, HDFS and Hive are not yet supported.
User defined functions {#user-defined-functions}
User-defined functions are a recent feature in ClickHouse. ClickHouse Cloud currently supports SQL UDFs only.
Experimental features {#experimental-features}
Experimental features are disabled in ClickHouse Cloud services to ensure the stability of service deployments.
Kafka {#kafka}
The
Kafka Table Engine
is not generally available in ClickHouse Cloud. Instead, we recommend relying on architectures that decouple the Kafka connectivity components from the ClickHouse service to achieve a separation of concerns. We recommend
ClickPipes
for pulling data from a Kafka stream. Alternatively, consider the push-based alternatives listed in the
Kafka User Guide
.
Named collections {#named-collections}
Named collections
are not currently supported in ClickHouse Cloud.
Operational defaults and considerations {#operational-defaults-and-considerations}
The following are default settings for ClickHouse Cloud services. In some cases, these settings are fixed to ensure the correct operation of the service, and in others, they can be adjusted.
Operational limits {#operational-limits}
max_parts_in_total: 10,000
{#max_parts_in_total-10000}
The default value of the
max_parts_in_total
setting for MergeTree tables has been lowered from 100,000 to 10,000 because we observed that a large number of data parts is likely to cause slow service startup times in the cloud. A large number of parts usually indicates a partition key that is too granular, which is typically chosen accidentally and should be avoided. The lower default allows these cases to be detected earlier.
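One way to see whether a service is approaching this limit is to count active parts per table using the standard system.parts system table (a sketch; adjust the limit as needed):

```sql
-- Tables with the most active data parts; a very high count often points
-- to an overly granular partition key.
SELECT database, table, count() AS active_parts
FROM system.parts
WHERE active
GROUP BY database, table
ORDER BY active_parts DESC
LIMIT 10;
```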
max_concurrent_queries: 1,000
{#max_concurrent_queries-1000}
This per-server setting was increased from the default of 100 to 1,000 to allow for more concurrency.
This results in number of replicas * 1,000 concurrent queries per service: 1,000 concurrent queries for a Basic tier service (limited to a single replica), and 1,000+ for Scale and Enterprise, depending on the number of replicas configured.
max_table_size_to_drop: 1,000,000,000,000
{#max_table_size_to_drop-1000000000000}
This setting was increased from 50 GB to allow dropping tables/partitions of up to 1 TB.
System settings {#system-settings}
ClickHouse Cloud is tuned for variable workloads, and for that reason most system settings are not configurable at this time. We do not anticipate the need to tune system settings for most users, but if you have a question about advanced system tuning, please contact ClickHouse Cloud Support.
Advanced security administration {#advanced-security-administration} | {"source_file": "cloud-compatibility.md"} | [
-0.045739974826574326,
-0.11648429185152054,
-0.06184781342744827,
0.030506538227200508,
-0.0646861270070076,
-0.06479427963495255,
-0.013550922274589539,
-0.03807321935892105,
-0.01564614474773407,
0.05794447660446167,
-0.05712532624602318,
-0.026609739288687706,
0.028380895033478737,
-0.... |
8039365e-eba1-4633-8ae9-b14398f14428 | Advanced security administration {#advanced-security-administration}
As part of creating the ClickHouse service, we create a default database, and a default user that has broad permissions to this database. This initial user can create additional users and assign them permissions to this database. Beyond this, enabling database security features that use Kerberos, LDAP, or SSL X.509 certificate authentication is not supported at this time.
Roadmap {#roadmap}
We are introducing support for executable UDFs in the Cloud and evaluating demand for many other features. If you have feedback and would like to ask for a specific feature, please
submit it here
. | {"source_file": "cloud-compatibility.md"} | [
-0.015553956851363182,
-0.0627061054110527,
-0.07303152233362198,
-0.04240452125668526,
-0.052723873406648636,
-0.005719389766454697,
0.06116317957639694,
-0.014024877920746803,
-0.10743656009435654,
0.04253216087818146,
0.003727583447471261,
-0.027623778209090233,
0.0803234875202179,
0.06... |
28996a43-0a11-478d-bdb0-fc4525ebb742 | slug: /cloud/data-resiliency
sidebar_label: 'Data resiliency'
title: 'Disaster recovery'
description: 'This guide provides an overview of disaster recovery.'
doc_type: 'reference'
keywords: ['ClickHouse Cloud', 'data resiliency', 'disaster recovery']
import Image from '@theme/IdealImage';
import restore_backup from '@site/static/images/cloud/guides/restore_backup.png';
Data resiliency {#clickhouse-cloud-data-resiliency}
This page covers the disaster recovery recommendations for ClickHouse Cloud, and guidance for customers to recover from an outage.
ClickHouse Cloud does not currently support automatic failover or automatic syncing across multiple geographical regions.
:::tip
Customers should perform periodic backup restore testing to understand the specific RTO for their service size and configuration.
:::
Definitions {#definitions}
It is helpful to cover some definitions first.
RPO (Recovery Point Objective)
: The maximum acceptable data loss measured in time following a disruptive event. Example: An RPO of 30 mins means that in the event of a failure the DB should be restorable to data no older than 30 mins. This, of course, depends on how frequently backups are taken.
RTO (Recovery Time Objective)
: The maximum allowable downtime before normal operations must resume following an outage. Example: An RTO of 30 mins means that in the event of a failure, the team is able to restore data and applications and get normal operations going within 30 mins.
Database Backups and Snapshots
: Backups provide durable long-term storage with a separate copy of the data. Snapshots do not create an additional copy of the data, are usually faster, and provide better RPOs.
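To make the RPO definition concrete, here is a minimal sketch of how backup cadence bounds worst-case data loss. The helper function and minute-based timestamps are illustrative assumptions, not a ClickHouse Cloud API:

```python
def data_loss_minutes(backup_times, failure_time):
    """Minutes of data lost when restoring from the most recent backup
    completed before the failure; the backup interval bounds the RPO."""
    prior = [t for t in backup_times if t <= failure_time]
    if not prior:
        raise ValueError("no backup completed before the failure")
    return failure_time - max(prior)

# Backups every 6 hours (minutes 0 and 360); a failure at minute 400
# restores to minute 360, losing up to 40 minutes of data.
assert data_loss_minutes([0, 360], 400) == 40
```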
Database backups {#database-backups}
Keeping a backup of your primary service gives you a copy to restore from in the event of primary service downtime.
ClickHouse Cloud supports the following capabilities for backups.
Default backups
By default, ClickHouse Cloud takes a
backup
of your service every 24 hours.
These backups are in the same region as the service, and happen in the ClickHouse CSP (cloud service provider) storage bucket.
In the event that the data in the primary service gets corrupted, the backup can be used to restore to a new service.
External backups (in the customer's own storage bucket)
Enterprise Tier customers can
export backups
to their object storage in their own account, in the same region, or in another region.
Cross-cloud backup export support is coming soon.
Applicable data transfer charges will apply for cross-region, and cross-cloud backups.
:::note
This feature is not currently available in PCI/HIPAA services.
:::
Configurable backups
Customers can
configure backups
to happen at a higher frequency, up to every 6 hours, to improve the RPO.
Customers can also configure longer retention. | {"source_file": "data-resiliency.md"} | [
-0.049941278994083405,
-0.04639247804880142,
0.03201298788189888,
0.024728652089834213,
0.05665262043476105,
-0.03385826572775841,
0.0069264755584299564,
0.03229454532265663,
-0.058926772326231,
0.027537671849131584,
0.04368910565972328,
0.08669406920671463,
0.10180292278528214,
-0.0275702... |
467bbfd7-89b8-4f15-a003-a82c1790e2b1 | Configurable backups
Customers can
configure backups
to happen at a higher frequency, up to every 6 hours, to improve the RPO.
Customers can also configure longer retention.
The backups currently available for the service are listed on the "Backups" page of the ClickHouse Cloud console.
This section also provides the success / failure status for each backup.
Restoring from a Backup {#restoring-from-a-backup}
Default backups, in the ClickHouse Cloud bucket, can be restored to a new service in the same region.
External backups (in customer object storage) can be restored to a new service in the same or different region.
Backup and restore duration guidance {#backup-and-restore-duration-guidance}
Backup and restore durations depend on several factors such as the size of the database as well as the schema and the number of tables in the database.
In our testing, we have seen smaller backups of ~1 TB take 10-15 minutes or longer to back up.
Backups of less than 20 TB usually complete within an hour, and backing up ~50 TB of data should take 2-3 hours.
Backups get economies of scale at larger sizes, and we have seen backups of up to 1 PB for some internal services complete in under 10 hours.
We recommend testing with your own database or sample data to get better estimates as the actual duration depends on several factors as outlined above.
Restore durations are similar to backup durations for data of a similar size.
As mentioned above, we recommend testing with your own database to get an idea of how long it will take to restore a backup.
:::note
There is currently NO support for automatic failover between two ClickHouse Cloud instances, whether in the same or a different region.
There is currently NO automatic syncing of data between different ClickHouse Cloud services in the same or different regions (i.e., active-active replication).
:::
Recovery process {#recovery-process}
This section explains the various recovery options and the process that can be followed in each case.
Primary service data corruption {#primary-service-data-corruption}
In this case, the data
can be restored
from the backup to another service in the same region.
The backup could be up to 24 hours old if using the default backup policy, or up to 6 hours old (if using configurable backups with 6 hours frequency).
Restoration steps {#restoration-steps}
To restore from an existing backup:
1. Go to the "Backups" section of the ClickHouse Cloud console.
2. Click on the three dots under "Actions" for the specific backup that you want to restore from.
3. Give the new service a name and restore from this backup.
Primary region downtime {#primary-region-downtime}
Customers in the Enterprise Tier can
export backups
to their own cloud provider bucket.
If you are concerned about regional failures, we recommend exporting backups to a different region.
Keep in mind that cross-region data transfer charges will apply. | {"source_file": "data-resiliency.md"} | [
-0.06705576181411743,
-0.04835948348045349,
0.015370880253612995,
0.024729732424020767,
0.018291352316737175,
0.003646469907835126,
-0.06998331844806671,
-0.03224533051252365,
-0.018719982355833054,
0.01173426304012537,
-0.0014436867786571383,
0.08222769945859909,
0.060917869210243225,
-0.... |
78fdcf4c-ac49-40b0-9204-c0fa8d0336e8 | If the primary region goes down, the backup in another region can be restored to a new service in a different region.
Once the backup has been restored to another service, you will need to ensure that any DNS, load balancer, or connection string configurations are updated to point to the new service.
This may involve:
- Updating environment variables or secrets
- Restarting application services to establish new connections
:::note
Backup / restore to an external bucket is currently not supported for services utilizing
Transparent Data Encryption (TDE)
.
:::
Additional options {#additional-options}
There are some additional options to consider.
Dual-writing to separate clusters
In this option, you can set up 2 separate clusters in different regions and dual-write to both.
This option inherently comes with a higher cost as it involves running multiple services but provides higher availability in case of one region being unavailable.
Utilize CSP replication
With this option, you would utilize the cloud service provider's native object storage replication to copy data over.
For instance, with BYOB you can export the backup to a bucket that you own in the primary region, and have that replicated over to another region using
AWS cross region replication
. | {"source_file": "data-resiliency.md"} | [
-0.047407981008291245,
-0.0352436862885952,
0.03426723927259445,
-0.011732121929526329,
-0.049006324261426926,
0.005620765034109354,
-0.06787268817424774,
-0.034918077290058136,
0.02589806169271469,
0.038333114236593246,
-0.08168625086545944,
0.015447684563696384,
0.0787452831864357,
-0.06... |
790d1565-2314-4d29-abcb-c70c3e2ef042 | sidebar_label: 'Account closure'
slug: /cloud/manage/close_account
title: 'Account closure and deletion'
description: 'We know there are circumstances that sometimes necessitate account closure. This guide will help you through the process.'
keywords: ['ClickHouse Cloud', 'account closure', 'delete account', 'cloud account management', 'account deletion']
doc_type: 'guide'
Account closure and deletion {#account-close--deletion}
Our goal is to help you be successful in your project. If you have questions that are not answered on this site or need help evaluating a
unique use case, please contact us at
support@clickhouse.com
.
We know there are circumstances that sometimes necessitate account closure. This guide will help you through the process.
Close versus delete your account {#close-vs-delete}
Customers may log back into closed accounts to view usage, billing and account-level activity logs. This enables you to easily access
details that are useful for a variety of purposes, from documenting use cases to downloading invoices at the end of the year for tax purposes.
You will also continue receiving product updates so that you know if a feature you may have been waiting for is now available. Additionally,
closed accounts may be reopened at any time to start new services.
Customers requesting personal data deletion should be aware this is an irreversible process. The account and related information will no longer
be available. You will not receive product updates and may not reopen the account. This will not affect any newsletter subscriptions.
Newsletter subscribers can unsubscribe at any time by using the unsubscribe link at the bottom of the newsletter email without closing their account or
deleting their information.
Preparing for closure {#preparing-for-closure}
Before requesting account closure, please take the following steps to prepare the account.
1. Export any data from your service that you need to keep.
2. Stop and delete your services. This will keep additional charges from accruing on your account.
3. Remove all users except the admin that will request closure. This will help you ensure no new services are created while the process completes.
4. Review the 'Usage' and 'Billing' tabs in the control panel to verify all charges have been paid. We are not able to close accounts with unpaid balances.
Request an account closure {#request-account-closure}
We are required to authenticate requests for both closure and deletion. To ensure your request can be processed quickly, please follow the steps outlined
below.
1. Sign into your clickhouse.cloud account.
2. Complete any remaining steps in the
Preparing for Closure
section above.
3. Click the Help button (question mark in the upper right corner of the screen).
4. Under 'Support' click 'Create case.'
5. In the 'Create new case' screen, enter the following: | {"source_file": "11_account-close.md"} | [
-0.04594867303967476,
-0.027900487184524536,
0.00527314655482769,
-0.010274396277964115,
0.031109938398003578,
0.005315354559570551,
0.07597231864929199,
-0.06750760972499847,
0.03878984600305557,
0.011305997148156166,
0.059184540063142776,
0.02133897691965103,
0.006199889350682497,
-0.075... |
5f60264f-aa97-4cfa-94e9-0c6a7b5f9051 | text
Priority: Severity 3
Subject: Please close my ClickHouse account
Description: We would appreciate it if you would share a brief note about why you are cancelling.
Click 'Create new case'.
We will close your account and send a confirmation email to let you know when it is complete.
Request deletion of your personal data {#request-personal-data-deletion}
Please note, only account administrators may request personal data deletion from ClickHouse. If you are not an account administrator, please contact
your ClickHouse account administrator to request to be removed from the account.
To request data deletion, follow the steps in 'Request Account Closure' above. When entering the case information, change the subject to
'Please close my ClickHouse account and delete my personal data.'
We will complete personal data deletion requests within 30 days. | {"source_file": "11_account-close.md"} | [
-0.02448120340704918,
0.012163790874183178,
0.05955658107995987,
-0.01753261685371399,
0.030914846807718277,
-0.061216384172439575,
0.06904473900794983,
-0.0802459865808487,
0.03823312744498253,
0.03170541673898697,
0.03933101147413254,
-0.0012724248226732016,
0.02915126085281372,
-0.05144... |
1918ebf3-6315-42b9-b360-beb30e1d1233 | sidebar_label: 'Personal data access'
slug: /cloud/manage/personal-data-access
title: 'Personal data access'
description: 'As a registered user, ClickHouse allows you to view and manage your personal account data, including contact information.'
doc_type: 'reference'
keywords: ['ClickHouse Cloud', 'personal data', 'DSAR', 'data subject access request', 'privacy policy', 'GDPR']
import Image from '@theme/IdealImage';
import support_case_form from '@site/static/images/cloud/security/support-case-form.png';
Intro {#intro}
As a registered user, ClickHouse allows you to view and manage your personal account data, including contact information. Depending on your role, this may also include access to the contact information of other users in your organization, API key details, and other relevant information. You can manage these details directly through the ClickHouse console on a self-serve basis.
What is a Data Subject Access Request (DSAR)
Depending on where you are located, applicable law may also provide you additional rights as to personal data that ClickHouse holds about you (Data Subject Rights), as described in the ClickHouse Privacy Policy. The process for exercising Data Subject Rights is known as a Data Subject Access Request (DSAR).
Scope of Personal Data
Please review ClickHouse's Privacy Policy for details on personal data that ClickHouse collects and how it may be used.
Self service {#self-service}
By default, ClickHouse empowers users to view their personal data directly from the ClickHouse console.
Below is a summary of the data ClickHouse collects during account setup and service usage, along with information on where specific personal data can be viewed within the ClickHouse console.
| Location/URL | Description | Personal Data |
|-------------|----------------|-----------------------------------------|
| https://auth.clickhouse.cloud/u/signup/ | Account registration | email, password |
| https://console.clickhouse.cloud/profile | General user profile details | name, email |
| https://console.clickhouse.cloud/organizations/OrgID/members | List of users in an organization | name, email |
| https://console.clickhouse.cloud/organizations/OrgID/keys | List of API keys and who created them | email |
| https://console.clickhouse.cloud/organizations/OrgID/audit | Activity log, listing actions by individual users | email |
| https://console.clickhouse.cloud/organizations/OrgID/billing | Billing information and invoices | billing address, email |
| https://console.clickhouse.cloud/support | Interactions with ClickHouse Support | name, email |
Note: URLs with
OrgID
need to be updated to reflect the
OrgID
for your specific account.
Current customers {#current-customers} | {"source_file": "10_personal-data-access.md"} | [
-0.07527877390384674,
0.03654060140252113,
-0.0386180579662323,
-0.006691224407404661,
0.034795742481946945,
-0.0269350353628397,
0.06532168388366699,
0.034616634249687195,
-0.012228121049702168,
-0.03126313537359238,
-0.004663459025323391,
-0.008711221627891064,
0.028897108510136604,
-0.0... |
d81c80dc-10a9-421a-b194-187bb4d9fc02 | Note: URLs with
OrgID
need to be updated to reflect the
OrgID
for your specific account.
Current customers {#current-customers}
If you have an account with us and the self-service option has not resolved your personal data issue, you can submit a Data Subject Access Request under the Privacy Policy. To do so, log into your ClickHouse account and open a
support case
. This helps us verify your identity and streamline the process to address your request.
Please be sure to include the following details in your support case:
| Field | Text to include in your request |
|-------------|---------------------------------------------------|
| Subject | Data Subject Access Request (DSAR) |
| Description | Detailed description of the information you'd like ClickHouse to look for, collect, and/or provide. |
Individuals without an account {#individuals-without-an-account}
If you do not have an account with us and the self-service option above has not resolved your personal-data issue, and you wish to make a Data Subject Access Request pursuant to the Privacy Policy, you may submit these requests by email to
privacy@clickhouse.com
.
Identity verification {#identity-verification}
Should you submit a Data Subject Access Request through email, we may request specific information from you to help us confirm your identity and process your request. Applicable law may require or permit us to decline your request. If we decline your request, we will tell you why, subject to legal restrictions.
For more information, please review the
ClickHouse Privacy Policy | {"source_file": "10_personal-data-access.md"} | [
-0.06553016602993011,
0.021142395213246346,
-0.02840040810406208,
0.010498851537704468,
-0.02745690383017063,
-0.07245583087205887,
0.004664849489927292,
-0.06060858443379402,
-0.00384245696477592,
-0.006585087161511183,
-0.0009491638629697263,
-0.05957692861557007,
-0.005328059196472168,
... |
b058fcab-1265-4bd7-bb60-eb4e6dcf6762 | sidebar_label: 'Configuring settings'
slug: /manage/settings
title: 'Configuring settings'
description: 'How to configure settings for your ClickHouse Cloud service for a specific user or role'
keywords: ['ClickHouse Cloud', 'settings configuration', 'cloud settings', 'user settings', 'role settings']
doc_type: 'guide'
import Image from '@theme/IdealImage';
import cloud_settings_sidebar from '@site/static/images/cloud/manage/cloud-settings-sidebar.png';
Configuring settings
To specify settings for your ClickHouse Cloud service for a specific
user
or
role
, you must use
SQL-driven Settings Profiles
. Applying Settings Profiles ensures that the settings you configure persist, even when your services stop, idle, and upgrade. To learn more about Settings Profiles, please see
this page
.
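A minimal sketch of a SQL-driven Settings Profile follows; the profile name, user name, and setting value are illustrative assumptions:

```sql
-- Create a profile capping per-query memory and apply it to a user.
-- Settings applied this way persist across stops, idling, and upgrades.
CREATE SETTINGS PROFILE memory_cap
SETTINGS max_memory_usage = 10000000000
TO analytics_user;
```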
Please note that XML-based Settings Profiles and
configuration files
are currently not supported for ClickHouse Cloud.
To learn more about the settings you can specify for your ClickHouse Cloud service, please see all possible settings by category in
our docs
. | {"source_file": "08_settings.md"} | [
-0.0039023421704769135,
-0.0508732907474041,
-0.01050445158034563,
0.018025420606136322,
-0.03975602984428406,
0.023776063695549965,
0.07056385278701782,
-0.026715518906712532,
-0.11442700773477554,
0.01685304008424282,
0.06274282932281494,
0.002003207802772522,
0.09777217358350754,
-0.030... |
3b250330-0a78-4afd-aa4f-625924c9efd2 | sidebar_label: 'Architecture'
slug: /cloud/reference/architecture
title: 'ClickHouse Cloud architecture'
description: 'This page describes the architecture of ClickHouse Cloud'
keywords: ['ClickHouse Cloud', 'cloud architecture', 'separation of storage and compute']
doc_type: 'reference'
import Image from '@theme/IdealImage';
import Architecture from '@site/static/images/cloud/reference/architecture.png';
ClickHouse Cloud architecture
Storage backed by object store {#storage-backed-by-object-store}
Virtually unlimited storage
No need to manually shard data
Significantly lower price point for storing data, especially data that is accessed less frequently
Compute {#compute}
Automatic scaling and idling: No need to size up front, and no need to over-provision for peak use
Automatic idling and resume: No need to have unused compute running while no one is using it
Secure and HA by default
Administration {#administration}
Setup, monitoring, backups, and billing are performed for you.
Cost controls are enabled by default, and can be adjusted by you through the Cloud console.
Service isolation {#service-isolation}
Network isolation {#network-isolation}
All services are isolated at the network layer.
Compute isolation {#compute-isolation}
All services are deployed in separate pods in their respective Kubernetes namespaces, with network-level isolation.
Storage isolation {#storage-isolation}
All services use a separate subpath of a shared bucket (AWS, GCP) or storage container (Azure).
For AWS, access to storage is controlled via AWS IAM, and each IAM role is unique per service. For the Enterprise service,
CMEK
can be enabled to provide advanced data isolation at rest. CMEK is only supported for AWS services at this time.
For GCP and Azure, services have object storage isolation (all services have their own buckets or storage container).
Compute-compute separation {#compute-compute-separation}
Compute-compute separation
lets users create multiple compute node groups, each with their own service URL, that all use the same shared object storage. This allows for compute isolation of different use cases such as reads from writes, that share the same data. It also leads to more efficient resource utilization by allowing for independent scaling of the compute groups as needed.
Concurrency limits {#concurrency-limits}
There is no limit to the number of queries per second (QPS) in your ClickHouse Cloud service. There is, however, a limit of 1000 concurrent queries per replica. QPS is ultimately a function of your average query execution time and the number of replicas in your service.
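As a back-of-the-envelope illustration (an assumption based on Little's law, not an official formula), sustained QPS is roughly total available concurrency divided by average query latency:

```python
PER_REPLICA_CONCURRENCY = 1000  # ClickHouse Cloud limit per replica

def max_sustained_qps(replicas: int, avg_query_seconds: float) -> float:
    """Rough upper bound on queries per second for a service."""
    return replicas * PER_REPLICA_CONCURRENCY / avg_query_seconds

# 3 replicas with a 0.5 s average query time sustain roughly 6,000 QPS.
assert max_sustained_qps(3, 0.5) == 6000.0
```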
A major benefit of ClickHouse Cloud compared to a self-managed ClickHouse instance or other databases/data warehouses is that you can easily increase concurrency by
adding more replicas (horizontal scaling).

{"source_file": "02_architecture.md"}

sidebar_label: 'Service uptime and SLA'
slug: /cloud/manage/service-uptime
title: 'Service uptime'
description: 'Users can now see regional uptimes on the status page and subscribe to alerts on service disruptions.'
keywords: ['ClickHouse Cloud', 'service uptime', 'SLA', 'cloud reliability', 'status monitoring']
doc_type: 'reference'
Uptime alerts {#uptime-alerts}
Users can now see regional uptimes on the
status page
and subscribe to alerts on service disruptions.
SLA {#sla}
We also offer Service Level Agreements for select committed spend contracts. Please contact us at sales@clickhouse.com to learn more about our SLA policy.

{"source_file": "06_service-uptime.md"}

slug: /cloud/reference
keywords: ['Cloud', 'reference', 'architecture', 'SharedMergeTree', 'Compute-compute Separation', 'Bring Your Own Cloud', 'Changelogs', 'Supported Cloud Regions', 'Cloud Compatibility']
title: 'Overview'
hide_title: true
description: 'Landing page for the Cloud reference section'
doc_type: 'landing-page'
Cloud reference
This section acts as a reference guide for some of the more technical details of ClickHouse Cloud and contains the following pages:
| Page | Description |
|-----------------------------------|-----------------------------------------------------------------------------------------------------------|
| Architecture | Discusses the architecture of ClickHouse Cloud, including storage, compute, administration, and security. |
| SharedMergeTree | Explainer on SharedMergeTree, the cloud-native replacement for the ReplicatedMergeTree and analogues. |
| Warehouses | Explainer on what Warehouses and compute-compute separation are in ClickHouse Cloud. |
| BYOC (Bring Your Own Cloud) | Explainer on the Bring Your Own Cloud (BYOC) service available with ClickHouse Cloud. |
| Changelogs | Cloud Changelogs and Release Notes. |
| Cloud Compatibility | A guide to what to expect functionally and operationally in ClickHouse Cloud. |
| Supported Cloud Regions | A list of the supported cloud regions for AWS, Google and Azure. |

{"source_file": "index.md"}
title: 'Supported cloud regions'
sidebar_label: 'Supported Cloud regions'
keywords: ['aws', 'gcp', 'google cloud', 'azure', 'cloud', 'regions']
description: 'Supported regions for ClickHouse Cloud'
slug: /cloud/reference/supported-regions
doc_type: 'reference'
import EnterprisePlanFeatureBadge from '@theme/badges/EnterprisePlanFeatureBadge'
Supported cloud regions
AWS regions {#aws-regions}
- ap-northeast-1 (Tokyo)
- ap-south-1 (Mumbai)
- ap-southeast-1 (Singapore)
- ap-southeast-2 (Sydney)
- eu-central-1 (Frankfurt)
- eu-west-1 (Ireland)
- eu-west-2 (London)
- me-central-1 (UAE)
- us-east-1 (N. Virginia)
- us-east-2 (Ohio)
- us-west-2 (Oregon)
Private Region:
- ca-central-1 (Canada)
- af-south-1 (South Africa)
- eu-north-1 (Stockholm)
- sa-east-1 (South America)
- ap-northeast-2 (South Korea, Seoul)
Google Cloud regions {#google-cloud-regions}
- asia-southeast1 (Singapore)
- europe-west4 (Netherlands)
- us-central1 (Iowa)
- us-east1 (South Carolina)
Private Region:
- us-west1 (Oregon)
- australia-southeast1 (Sydney)
- asia-northeast1 (Tokyo)
- europe-west3 (Frankfurt)
- europe-west6 (Zurich)
- northamerica-northeast1 (Montréal)
Azure regions {#azure-regions}
- West US 3 (Arizona)
- East US 2 (Virginia)
- Germany West Central (Frankfurt)
Private Region:
- JapanEast
:::note
Need to deploy to a region not currently listed?
Submit a request
.
:::
Private regions {#private-regions}
We offer Private regions for our Enterprise tier services. Please
Contact us
for private region requests.
Key considerations for private regions:
- Services will not auto-scale; however, manual vertical and horizontal scaling is supported.
- Services cannot be idled.
- Status page is not available for private regions.
Additional requirements may apply for HIPAA compliance (including signing a BAA). Note that HIPAA is currently available only for Enterprise tier services.
HIPAA compliant regions {#hipaa-compliant-regions}
Customers must sign a Business Associate Agreement (BAA) and request onboarding through Sales or Support to set up services in HIPAA compliant regions. The following regions support HIPAA compliance:
- AWS af-south-1 (South Africa)
Private Region
- AWS ca-central-1 (Canada)
Private Region
- AWS eu-central-1 (Frankfurt)
- AWS eu-north-1 (Stockholm)
Private Region
- AWS eu-west-1 (Ireland)
- AWS eu-west-2 (London)
- AWS sa-east-1 (South America)
Private Region
- AWS us-east-1 (N. Virginia)
- AWS us-east-2 (Ohio)
- AWS us-west-2 (Oregon)
- GCP europe-west4 (Netherlands)
- GCP us-central1 (Iowa)
- GCP us-east1 (South Carolina)
PCI compliant regions {#pci-compliant-regions}
Customers must request onboarding through Sales or Support to set up services in PCI compliant regions. The following regions support PCI compliance:
- AWS af-south-1 (South Africa)
Private Region
- AWS ca-central-1 (Canada)
Private Region
- AWS eu-central-1 (Frankfurt)
- AWS eu-north-1 (Stockholm)
Private Region
- AWS eu-west-1 (Ireland)
- AWS eu-west-2 (London)
- AWS sa-east-1 (South America)
Private Region
- AWS us-east-1 (N. Virginia)
- AWS us-east-2 (Ohio)
- AWS us-west-2 (Oregon)

{"source_file": "05_supported-regions.md"}

sidebar_label: 'Upgrades'
slug: /manage/updates
title: 'Upgrades'
description: 'With ClickHouse Cloud you never have to worry about patching and upgrades. We roll out upgrades that include fixes, new features and performance improvements on a periodic basis.'
doc_type: 'guide'
keywords: ['upgrades', 'version management', 'cloud features', 'maintenance', 'updates']
import Image from '@theme/IdealImage';
import EnterprisePlanFeatureBadge from '@theme/badges/EnterprisePlanFeatureBadge'
import ScalePlanFeatureBadge from '@theme/badges/ScalePlanFeatureBadge'
import fast_release from '@site/static/images/cloud/manage/fast_release.png';
import enroll_fast_release from '@site/static/images/cloud/manage/enroll_fast_release.png';
import scheduled_upgrades from '@site/static/images/cloud/manage/scheduled_upgrades.png';
import scheduled_upgrade_window from '@site/static/images/cloud/manage/scheduled_upgrade_window.png';
Upgrades
With ClickHouse Cloud you never have to worry about patching and upgrades. We roll out upgrades that include fixes, new features and performance improvements on a periodic basis. For the full list of what is new in ClickHouse refer to our
Cloud changelog
.
:::note
We are introducing a new upgrade mechanism, a concept we call "make before break" (or MBB). With this new approach, we add updated replica(s) before removing the old one(s) during the upgrade operation. This results in more seamless upgrades that are less disruptive to running workloads.
As part of this change, historical system table data will be retained for up to a maximum of 30 days as part of upgrade events. In addition, any system table data older than December 19, 2024, for services on AWS or GCP and older than January 14, 2025, for services on Azure will not be retained as part of the migration to the new organization tiers.
:::
Version compatibility {#version-compatibility}
When you create a service, the
compatibility
setting is set to the most up-to-date ClickHouse version offered on ClickHouse Cloud at the time your service is initially provisioned.
The
compatibility
setting allows you to use default values of settings from previous versions. When your service is upgraded to a new version, the version specified for the
compatibility
setting does not change. This means that default values for settings that existed when you first created your service will not change (unless you have already overridden those default values, in which case they will persist after the upgrade).
You cannot manage the service-level default
compatibility
setting for your service. You must
contact support
if you would like to change the version set for your service's default
compatibility
setting. However, you can override the
compatibility
setting at the user, role, profile, query, or session level using standard ClickHouse setting mechanisms such as
SET compatibility = '22.3'
in a session or
SETTINGS compatibility = '22.3'
in a query.
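For instance, using the version from the example above, the override can be applied at either scope; `system.settings` is queried here only to inspect the effective value:

```sql
-- Session-level override: affects all subsequent queries in this session
SET compatibility = '22.3';

-- Query-level override: affects only this query
SELECT name, value
FROM system.settings
WHERE name = 'compatibility'
SETTINGS compatibility = '22.3';
```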
Maintenance mode {#maintenance-mode}
At times, it may be necessary for us to update your service, which could require us to disable certain features such as scaling or idling. In rare cases, we may need to take action on a service that is experiencing issues and bring it back to a healthy state. During such maintenance, you will see a banner on the service page that says
"Maintenance in progress"
. You may still be able to use the service for queries during this time.
You will not be charged for the time that the service is under maintenance.
Maintenance mode
is a rare occurrence and should not be confused with regular service upgrades.
Release channels (upgrade schedule) {#release-channels-upgrade-schedule}
Users are able to specify the upgrade schedule for their ClickHouse Cloud service by subscribing to a specific release channel. There are three release channels, and the user has the ability to configure the day and time of the week for upgrades with the
scheduled upgrades
feature.
The three release channels are:
- The fast release channel for early access to upgrades.
- The regular release channel is the default, and upgrades on this channel start two weeks after the fast release channel upgrades. If your service on the Scale and Enterprise tier does not have a release channel set, it is on the regular release channel by default.
- The slow release channel is for deferred release. Upgrades on this channel occur two weeks after the regular release channel upgrades.
:::note
Basic tier services are automatically enrolled in the fast release channel.
:::
Fast release channel (early upgrades) {#fast-release-channel-early-upgrades}
Besides the regular upgrade schedule, we offer a
Fast release
channel if you would like your services to receive updates ahead of the regular release schedule.
Specifically, services will:
- Receive the latest ClickHouse releases
- Receive more frequent upgrades as new releases are tested
You can modify the release schedule of the service in the Cloud console as shown below:
This
Fast release
channel is suitable for testing new features in non-critical environments.
It is not recommended for production workloads with strict uptime and reliability requirements.
Regular release channel {#regular-release-channel}
For all Scale and Enterprise tier services that do not have a release channel or an upgrade schedule configured, upgrades will be performed as a part of the Regular channel release. This is recommended for production environments.
Upgrades to the regular release channel are typically performed two weeks after the
Fast release channel
.
:::note
Basic tier services are upgraded soon after the Fast release channel.
:::
Slow release channel (deferred upgrades) {#slow-release-channel-deferred-upgrades}
We offer a Slow release channel if you would like your services to receive upgrades after the regular release schedule.
Specifically, services will:
- Be upgraded after the Fast and Regular release channel roll-outs are complete
- Receive ClickHouse releases ~2 weeks after the regular release
- Be meant for customers that want additional time to test ClickHouse releases in their non-production environments before the production upgrade. Non-production environments can either get upgrades on the Fast or the Regular release channel for testing and validation.
:::note
You can change release channels at any time. However, in certain cases, the change will only apply to future releases.
- Moving to a faster channel will immediately upgrade your service. i.e. Slow to Regular, Regular to Fast
- Moving to a slower channel will not downgrade your service and keep you on your current version until a newer one is available in that channel. i.e. Regular to Slow, Fast to Regular or Slow
:::
Scheduled upgrades {#scheduled-upgrades}
Users can configure an upgrade window for services in the Enterprise tier.
Select the service for which you wish to specify an upgrade schedule, followed by
Settings
from the left menu. Scroll to
Scheduled upgrades
.
Selecting this option will allow users to select the day of the week/time window for database and cloud upgrades.
:::note
While scheduled upgrades follow the defined schedule, exceptions apply for critical security patches and vulnerability fixes. In cases where an urgent security issue is identified, upgrades may be performed outside the scheduled window. Customers will be notified of such exceptions as necessary.
:::

{"source_file": "upgrades.md"}

sidebar_title: 'SQL console'
slug: /cloud/get-started/sql-console
description: 'Run queries and create visualizations using the SQL Console.'
keywords: ['sql console', 'sql client', 'cloud console', 'console']
title: 'SQL console'
doc_type: 'guide'

import Image from '@theme/IdealImage';
import table_list_and_schema from '@site/static/images/cloud/sqlconsole/table-list-and-schema.png';
import view_columns from '@site/static/images/cloud/sqlconsole/view-columns.png';
import abc from '@site/static/images/cloud/sqlconsole/abc.png';
import inspecting_cell_content from '@site/static/images/cloud/sqlconsole/inspecting-cell-content.png';
import sort_descending_on_column from '@site/static/images/cloud/sqlconsole/sort-descending-on-column.png';
import filter_on_radio_column_equal_gsm from '@site/static/images/cloud/sqlconsole/filter-on-radio-column-equal-gsm.png';
import add_more_filters from '@site/static/images/cloud/sqlconsole/add-more-filters.png';
import filtering_and_sorting_together from '@site/static/images/cloud/sqlconsole/filtering-and-sorting-together.png';
import create_a_query_from_sorts_and_filters from '@site/static/images/cloud/sqlconsole/create-a-query-from-sorts-and-filters.png';
import creating_a_query from '@site/static/images/cloud/sqlconsole/creating-a-query.png';
import run_selected_query from '@site/static/images/cloud/sqlconsole/run-selected-query.png';
import run_at_cursor_2 from '@site/static/images/cloud/sqlconsole/run-at-cursor-2.png';
import run_at_cursor from '@site/static/images/cloud/sqlconsole/run-at-cursor.png';
import cancel_a_query from '@site/static/images/cloud/sqlconsole/cancel-a-query.png';
import sql_console_save_query from '@site/static/images/cloud/sqlconsole/sql-console-save-query.png';
import sql_console_rename from '@site/static/images/cloud/sqlconsole/sql-console-rename.png';
import sql_console_share from '@site/static/images/cloud/sqlconsole/sql-console-share.png';
import sql_console_edit_access from '@site/static/images/cloud/sqlconsole/sql-console-edit-access.png';
import sql_console_add_team from '@site/static/images/cloud/sqlconsole/sql-console-add-team.png';
import sql_console_edit_member from '@site/static/images/cloud/sqlconsole/sql-console-edit-member.png';
import sql_console_access_queries from '@site/static/images/cloud/sqlconsole/sql-console-access-queries.png';
import search_hn from '@site/static/images/cloud/sqlconsole/search-hn.png';
import match_in_body from '@site/static/images/cloud/sqlconsole/match-in-body.png';
import pagination from '@site/static/images/cloud/sqlconsole/pagination.png';
import pagination_nav from '@site/static/images/cloud/sqlconsole/pagination-nav.png';
import download_as_csv from '@site/static/images/cloud/sqlconsole/download-as-csv.png';
import tabular_query_results from '@site/static/images/cloud/sqlconsole/tabular-query-results.png';
import switch_from_query_to_chart from '@site/static/images/cloud/sqlconsole/switch-from-query-to-chart.png';
import trip_total_by_week from '@site/static/images/cloud/sqlconsole/trip-total-by-week.png';
import bar_chart from '@site/static/images/cloud/sqlconsole/bar-chart.png';
import change_from_bar_to_area from '@site/static/images/cloud/sqlconsole/change-from-bar-to-area.png';
import update_query_name from '@site/static/images/cloud/sqlconsole/update-query-name.png';
import update_subtitle_etc from '@site/static/images/cloud/sqlconsole/update-subtitle-etc.png';
import adjust_axis_scale from '@site/static/images/cloud/sqlconsole/adjust-axis-scale.png';

SQL Console
SQL console is the fastest and easiest way to explore and query your databases in ClickHouse Cloud. You can use the SQL console to:
- Connect to your ClickHouse Cloud Services
- View, filter, and sort table data
- Execute queries and visualize result data in just a few clicks
- Share queries with team members and collaborate more effectively.
Exploring tables {#exploring-tables}
Viewing table list and schema info {#viewing-table-list-and-schema-info}
An overview of tables contained in your ClickHouse instance can be found in the left sidebar area. Use the database selector at the top of the left bar to view the tables in a specific database
Tables in the list can also be expanded to view columns and types
Exploring table data {#exploring-table-data}
Click on a table in the list to open it in a new tab. In the Table View, data can be easily viewed, selected, and copied. Note that structure and formatting are preserved when copy-pasting to spreadsheet applications such as Microsoft Excel and Google Sheets. You can flip between pages of table data (paginated in 30-row increments) using the navigation in the footer.
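Conceptually, each 30-row page of the Table View corresponds to LIMIT/OFFSET paging over the table (the table and sort key here are illustrative, not part of the console's actual implementation):

```sql
-- Page 2 of a result paginated 30 rows at a time
SELECT *
FROM my_table
ORDER BY id
LIMIT 30 OFFSET 30
```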
Inspecting cell data {#inspecting-cell-data}
The Cell Inspector tool can be used to view large amounts of data contained within a single cell. To open it, right-click on a cell and select 'Inspect Cell'. The contents of the cell inspector can be copied by clicking the copy icon in the top right corner of the inspector contents.
Filtering and sorting tables {#filtering-and-sorting-tables}
Sorting a table {#sorting-a-table}
To sort a table in the SQL console, open a table and select the 'Sort' button in the toolbar. This button will open a menu that will allow you to configure your sort. You can choose a column by which you want to sort and configure the ordering of the sort (ascending or descending). Select 'Apply' or press Enter to sort your table
The SQL console also allows you to add multiple sorts to a table. Click the 'Sort' button again to add another sort.
:::note
Sorts are applied in the order that they appear in the sort pane (top to bottom). To remove a sort, simply click the 'x' button next to the sort.
:::
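Stacked sorts applied top to bottom correspond to a multi-key ORDER BY; a sketch with illustrative table and column names:

```sql
-- First sort: created_at descending; tie-break: id ascending
SELECT *
FROM my_table
ORDER BY created_at DESC, id ASC
```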
Filtering a table {#filtering-a-table}
To filter a table in the SQL console, open a table and select the 'Filter' button. Just like sorting, this button will open a menu that will allow you to configure your filter. You can choose a column by which to filter and select the necessary criteria. The SQL console intelligently displays filter options that correspond to the type of data contained in the column.
When you're happy with your filter, you can select 'Apply' to filter your data. You can also add additional filters as shown below.
Similar to the sort functionality, click the 'x' button next to a filter to remove it.
Filtering and sorting together {#filtering-and-sorting-together}
The SQL console allows you to filter and sort a table at the same time. To do this, add all desired filters and sorts using the steps described above and click the 'Apply' button.
Creating a query from filters and sorts {#creating-a-query-from-filters-and-sorts}
The SQL console can convert your sorts and filters directly into queries with one click. Simply select the 'Create Query' button from the toolbar with the sort and filter parameters of your choosing. After clicking 'Create query', a new query tab will open pre-populated with the SQL command corresponding to the data contained in your table view.
:::note
Filters and sorts are not mandatory when using the 'Create Query' feature.
:::
You can learn more about querying in the SQL console by reading the (link) query documentation.
Creating and running a query {#creating-and-running-a-query}
Creating a query {#creating-a-query}
There are two ways to create a new query in the SQL console.
- Click the '+' button in the tab bar
- Select the 'New Query' button from the left sidebar query list
Running a query {#running-a-query}
To run a query, type your SQL command(s) into the SQL Editor and click the 'Run' button or use the shortcut
cmd / ctrl + enter
. To write and run multiple commands sequentially, make sure to add a semicolon after each command.
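For example, the following two commands run sequentially because each is terminated with a semicolon (the table is illustrative):

```sql
CREATE TABLE IF NOT EXISTS demo_events (id UInt32, ts DateTime)
ENGINE = MergeTree ORDER BY id;
SELECT count() FROM demo_events;
```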
Query Execution Options
By default, clicking the run button will run all commands contained in the SQL Editor. The SQL console supports two other query execution options:
- Run selected command(s)
- Run command at the cursor
To run selected command(s), highlight the desired command or sequence of commands and click the 'Run' button (or use the
cmd / ctrl + enter
shortcut). You can also select 'Run selected' from the SQL Editor context menu (opened by right-clicking anywhere within the editor) when a selection is present.
Running the command at the current cursor position can be achieved in two ways:
- Select 'At Cursor' from the extended run options menu (or use the corresponding cmd / ctrl + shift + enter keyboard shortcut)
- Select 'Run at cursor' from the SQL Editor context menu
:::note
The command present at the cursor position will flash yellow on execution.
:::
Canceling a query {#canceling-a-query}
While a query is running, the 'Run' button in the Query Editor toolbar will be replaced with a 'Cancel' button. Simply click this button or press
Esc
to cancel the query. Note: Any results that have already been returned will persist after cancellation.
Saving a query {#saving-a-query}
Saving queries allows you to easily find them later and share them with your teammates. The SQL console also allows you to organize your queries into folders.
To save a query, simply click the "Save" button immediately next to the "Run" button in the toolbar. Input the desired name and click "Save Query".
:::note
Using the shortcut
cmd / ctrl
+ s will also save any work in the current query tab.
:::
Alternatively, you can simultaneously name and save a query by clicking on "Untitled Query" in the toolbar, adjusting the name, and hitting Enter:
Query sharing {#query-sharing}
The SQL console allows you to easily share queries with your team members. The SQL console supports four levels of access that can be adjusted both globally and on a per-user basis:
- Owner (can adjust sharing options)
- Write access
- Read-only access
- No access
After saving a query, click the "Share" button in the toolbar. A modal with sharing options will appear:
To adjust query access for all organization members with access to the service, simply adjust the access level selector in the top line:
After applying the above, the query can now be viewed (and executed) by all team members with access to the SQL console for the service.
To adjust query access for specific members, select the desired team member from the "Add a team member" selector:
After selecting a team member, a new line item should appear with an access level selector:
Accessing shared queries {#accessing-shared-queries}
If a query has been shared with you, it will be displayed in the "Queries" tab of the SQL console left sidebar:
Linking to a query (permalinks) {#linking-to-a-query-permalinks}
Saved queries are also permalinked, meaning that you can send and receive links to shared queries and open them directly.
Values for any parameters that may exist in a query are automatically added to the saved query URL as query parameters. For example, if a query contains
{start_date: Date}
and
{end_date: Date}
parameters, the permalink can look like:
https://console.clickhouse.cloud/services/:serviceId/console/query/:queryId?param_start_date=2015-01-01&param_end_date=2016-01-01
.
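A query carrying such parameters might look like the following sketch (the table and column are illustrative; the {name: Type} placeholders are what get filled from the URL's query parameters):

```sql
SELECT count(*) AS trips
FROM nyc_taxi
WHERE pickup_datetime >= {start_date: Date}
  AND pickup_datetime <  {end_date: Date}
```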
Advanced querying features {#advanced-querying-features}
Searching query results {#searching-query-results}
After a query is executed, you can quickly search through the returned result set using the search input in the result pane. This feature assists in previewing the results of an additional
WHERE
clause or simply checking to ensure that specific data is included in the result set. After inputting a value into the search input, the result pane will update and return records containing an entry that matches the inputted value. In this example, we'll look for all instances of
breakfast
in the
hackernews
table for comments that contain
ClickHouse
(case-insensitive):
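For comparison, an explicit WHERE clause producing a similar case-insensitive match might look like this sketch (column names are assumed from the example, not taken from the console):

```sql
SELECT by, text
FROM hackernews
WHERE text ILIKE '%ClickHouse%'
  AND text ILIKE '%breakfast%'
```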
Note: Any field matching the inputted value will be returned. For example, the third record in the above screenshot does not match 'breakfast' in the
by
field, but the
text
field does:
Adjusting pagination settings {#adjusting-pagination-settings}
By default, the query result pane will display every result record on a single page. For larger result sets, it may be preferable to paginate results for easier viewing. This can be accomplished using the pagination selector in the bottom right corner of the result pane:
Selecting a page size will immediately apply pagination to the result set and navigation options will appear in the middle of the result pane footer
Exporting query result data {#exporting-query-result-data}
Query result sets can be easily exported to CSV format directly from the SQL console. To do so, open the
β’β’β’
menu on the right side of the result pane toolbar and select 'Download as CSV'.
Visualizing query data {#visualizing-query-data}
Some data can be more easily interpreted in chart form. You can quickly create visualizations from query result data directly from the SQL console in just a few clicks. As an example, we'll use a query that calculates weekly statistics for NYC taxi trips:
```sql
SELECT
    toStartOfWeek(pickup_datetime) AS week,
    sum(total_amount) AS fare_total,
    sum(trip_distance) AS distance_total,
    count(*) AS trip_total
FROM nyc_taxi
GROUP BY 1
ORDER BY 1 ASC
```
Without visualization, these results are difficult to interpret. Let's turn them into a chart.
Creating charts {#creating-charts}
To begin building your visualization, select the 'Chart' option from the query result pane toolbar. A chart configuration pane will appear:
We'll start by creating a simple bar chart tracking
trip_total
by
week
. To accomplish this, we'll drag the
week
field to the x-axis and the
trip_total
field to the y-axis:
Most chart types support multiple fields on numeric axes. To demonstrate, we'll drag the fare_total field onto the y-axis:
Customizing charts {#customizing-charts}
The SQL console supports ten chart types that can be selected from the chart type selector in the chart configuration pane. For example, we can easily change the previous chart type from Bar to an Area:
Chart titles match the name of the query supplying the data. Updating the name of the query will cause the Chart title to update as well:
A number of more advanced chart characteristics can also be adjusted in the 'Advanced' section of the chart configuration pane. To begin, we'll adjust the following settings:
Subtitle
Axis titles
Label orientation for the x-axis
Our chart will be updated accordingly: | {"source_file": "01_sql-console.md"} | [
c1080850-5573-49cc-804b-7b4e2004fd28 |
In some scenarios, it may be necessary to adjust the axis scales for each field independently. This can also be accomplished in the 'Advanced' section of the chart configuration pane by specifying min and max values for an axis range. As an example, the above chart looks good, but in order to demonstrate the correlation between our
trip_total
and
fare_total
fields, the axis ranges need some adjustment: | {"source_file": "01_sql-console.md"} | [
735560dc-eef2-49cd-a38c-9d0143660173 | sidebar_label: 'Dashboards'
slug: /cloud/manage/dashboards
title: 'Dashboards'
description: 'The SQL Console''s dashboards feature allows you to collect and share visualizations from saved queries.'
doc_type: 'guide'
keywords: ['ClickHouse Cloud', 'dashboards', 'data visualization', 'SQL console dashboards', 'cloud analytics']
import BetaBadge from '@theme/badges/BetaBadge';
import Image from '@theme/IdealImage';
import dashboards_2 from '@site/static/images/cloud/dashboards/2_dashboards.png';
import dashboards_3 from '@site/static/images/cloud/dashboards/3_dashboards.png';
import dashboards_4 from '@site/static/images/cloud/dashboards/4_dashboards.png';
import dashboards_5 from '@site/static/images/cloud/dashboards/5_dashboards.png';
import dashboards_6 from '@site/static/images/cloud/dashboards/6_dashboards.png';
import dashboards_7 from '@site/static/images/cloud/dashboards/7_dashboards.png';
import dashboards_8 from '@site/static/images/cloud/dashboards/8_dashboards.png';
import dashboards_9 from '@site/static/images/cloud/dashboards/9_dashboards.png';
import dashboards_10 from '@site/static/images/cloud/dashboards/10_dashboards.png';
import dashboards_11 from '@site/static/images/cloud/dashboards/11_dashboards.png';
Dashboards
The SQL Console's dashboards feature allows you to collect and share visualizations from saved queries. Get started by saving and visualizing queries, adding query visualizations to a dashboard, and making the dashboard interactive using query parameters.
Core concepts {#core-concepts}
Query sharing {#query-sharing}
In order to share your dashboard with colleagues, please be sure to share the underlying saved query. To view a visualization, users must have, at a minimum, read-only access to the underlying saved query.
Interactivity {#interactivity}
Use
query parameters
to make your dashboard interactive. For instance, you can add a query parameter to a
WHERE
clause to function as a filter.
You can toggle the query parameter input via the
Global
filters side pane by selecting a "filter" type in the visualization settings. You can also toggle the query parameter input by linking to another object (like a table) on the dashboard. Please see the "
configure a filter
" section of the quick start guide below.
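The filter pattern described here relies on ClickHouse's query-parameter syntax, where a placeholder like `{name:Type}` in a saved query is bound at run time. As a hedged sketch (the `query_kind` filter mirrors the quick start below; the exact saved query is not reproduced in this text):

```sql
-- Saved query with a WHERE clause driven by a query parameter.
-- {query_kind:String} is supplied by the dashboard's Global filter.
SELECT
    toStartOfDay(event_time) AS day,
    count() AS queries
FROM system.query_log
WHERE type = 'QueryFinish'
  AND query_kind = {query_kind:String}
GROUP BY day
ORDER BY day
```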
Quick start {#quick-start}
Let's create a dashboard to monitor our ClickHouse service using the
query_log
system table.
Quick start {#quick-start-1}
Create a saved query {#create-a-saved-query}
If you already have saved queries to visualize, you can skip this step.
Open a new query tab. Let's write a query to count query volume by day on a service using ClickHouse system tables:
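The exact query from the console screenshot is not reproduced in this text; a minimal sketch that counts query volume by day from the `system.query_log` system table might look like this:

```sql
-- Count finished queries per day over the last two weeks
SELECT
    toStartOfDay(event_time) AS day,
    count() AS query_count
FROM system.query_log
WHERE type = 'QueryFinish'
  AND event_time > now() - INTERVAL 14 DAY
GROUP BY day
ORDER BY day
```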
We can view the results of the query in table format or start building visualizations from the chart view. For the next step, we'll go ahead and save the query as
queries over time
:
More documentation around saved queries can be found in the
Saving a Query section
. | {"source_file": "04_dashboards.md"} | [
8007b0dc-1dc5-4dc7-be3d-96cad8433efd |
We can create and save another query,
query count by query kind
, to count the number of queries by query kind. Here's a bar chart visualization of the data in the SQL console.
Now that there are two queries, let's create a dashboard to visualize and collect them.
Create a dashboard {#create-a-dashboard}
Navigate to the Dashboards panel, and hit "New Dashboard". After you assign a name, you'll have successfully created your first dashboard!
Add a visualization {#add-a-visualization}
There are two saved queries,
queries over time
and
query count by query kind
. Let's visualize the first as a line chart. Give your visualization a title and subtitle, select the "Line" chart type, and assign the x and y axes.
Here, additional stylistic changes can also be made - like number formatting, legend layout, and axis labels.
Next, let's visualize the second query as a table, and position it below the line chart.
You've created your first dashboard by visualizing two saved queries!
Configure a filter {#configure-a-filter}
Let's make this dashboard interactive by adding a filter on query kind so you can display just the trends related to Insert queries. We'll accomplish this task using
query parameters
.
Click on the three dots next to the line chart, and click on the pencil button next to the query to open the in-line query editor. Here, we can edit the underlying saved query directly from the dashboard.
Now, when the yellow run query button is pressed, you'll see the same query from earlier filtered on just insert queries. Click on the save button to update the query. When you return to the chart settings, you'll be able to filter the line chart.
Now, using Global Filters on the top ribbon, you can toggle the filter by changing the input.
Suppose you want to link the line chart's filter to the table. You can do this by going back to the visualization settings, changing the query_kind query parameter's value source to a table, and selecting the query_kind column as the field to link.
Now, you can control the filter on the line chart directly from the queries by kind table to make your dashboard interactive. | {"source_file": "04_dashboards.md"} | [
a5a7ba90-1172-4a90-aa6b-eead2d2b2743 | sidebar_title: 'Query API endpoints'
slug: /cloud/features/query-api-endpoints
description: 'Easily spin up REST API endpoints from your saved queries'
keywords: ['api', 'query api endpoints', 'query endpoints', 'query rest api']
title: 'Query API endpoints'
doc_type: 'guide'
import {CardSecondary} from '@clickhouse/click-ui/bundled';
import Link from '@docusaurus/Link'
Query API endpoints
Building interactive data-driven applications requires more than a fast database, well-structured data, and optimized queries.
Your front-end and microservices also need an easy way to consume the data returned by those queries, preferably via well-structured APIs.
The
Query API Endpoints
feature allows you to create an API endpoint directly from any saved SQL query in the ClickHouse Cloud console.
You'll be able to access API endpoints via HTTP to execute your saved queries without needing to connect to your ClickHouse Cloud service via a native driver.
:::tip Guide
See the
Query API endpoints guide
for instructions on how to set up
query API endpoints in a few easy steps.
::: | {"source_file": "03_query-endpoints.md"} | [
92839303-300d-4626-bb75-6507e84b9921 | sidebar_label: 'HyperDX'
slug: /cloud/manage/hyperdx
title: 'HyperDX'
description: 'Provides HyperDX, the UI for ClickStack - a production-grade observability platform built on ClickHouse and OpenTelemetry (OTel), unifying logs, traces, metrics, and sessions in a single high-performance scalable solution.'
doc_type: 'guide'
keywords: ['hyperdx', 'observability', 'integration', 'cloud features', 'monitoring']
import PrivatePreviewBadge from '@theme/badges/PrivatePreviewBadge';
import Image from '@theme/IdealImage';
import hyperdx_cloud from '@site/static/images/use-cases/observability/hyperdx_cloud.png';
HyperDX is the user interface for
ClickStack
- a production-grade observability platform built on ClickHouse and OpenTelemetry (OTel), unifying logs, traces, metrics, and sessions in a single high-performance solution. Designed for monitoring and debugging complex systems, ClickStack enables developers and SREs to trace issues end-to-end without switching between tools or manually stitching together data using timestamps or correlation IDs.
HyperDX is a purpose-built frontend for exploring and visualizing observability data, supporting both Lucene-style and SQL queries, interactive dashboards, alerting, trace exploration, and more, all optimized for ClickHouse as the backend.
HyperDX in ClickHouse Cloud allows users to enjoy a more turnkey ClickStack experience - no infrastructure to manage, no separate authentication to configure.
HyperDX can be launched with a single click and connected to your data - fully integrated into the ClickHouse Cloud authentication system for seamless, secure access to your observability insights.
Deployment {#main-concepts}
HyperDX in ClickHouse Cloud is currently in private preview and must be enabled at the organization level. Once enabled, users will find HyperDX available in the main left navigation menu when selecting any service.
To get started with HyperDX in ClickHouse Cloud, we recommend our dedicated
getting started guide
.
For further details on ClickStack, see the
full documentation
. | {"source_file": "hyperdx.md"} | [
668ac102-64b8-49db-83d6-db025e32765c | sidebar_title: 'Query insights'
slug: /cloud/get-started/query-insights
description: 'Visualize system.query_log data to simplify query debugging and performance optimization'
keywords: ['query insights', 'query log', 'query log ui', 'system.query_log insights']
title: 'Query insights'
doc_type: 'guide'
import Image from '@theme/IdealImage';
import insights_overview from '@site/static/images/cloud/sqlconsole/insights_overview.png';
import insights_latency from '@site/static/images/cloud/sqlconsole/insights_latency.png';
import insights_recent from '@site/static/images/cloud/sqlconsole/insights_recent.png';
import insights_drilldown from '@site/static/images/cloud/sqlconsole/insights_drilldown.png';
import insights_query_info from '@site/static/images/cloud/sqlconsole/insights_query_info.png';
Query Insights
The
Query Insights
feature makes ClickHouse's built-in query log easier to use through various visualizations and tables. ClickHouse's
system.query_log
table is a key source of information for query optimization, debugging, and monitoring overall cluster health and performance.
Query overview {#query-overview}
After selecting a service, the
Monitoring
navigation item in the left sidebar should expand to reveal a new
Query insights
sub-item. Clicking on this option opens the new Query insights page:
Top-level metrics {#top-level-metrics}
The stat boxes at the top represent some basic top-level query metrics over the selected period of time. Beneath them, we've exposed three time-series charts representing query volume, latency, and error rate broken down by query kind (select, insert, other) over a selected time window. The latency chart can be further adjusted to display p50, p90, and p99 latencies:
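The percentiles shown in the latency chart can also be computed by hand from the query log. This is an illustrative sketch using real `system.query_log` columns, not the exact query behind the chart:

```sql
-- Approximate p50/p90/p99 query latency (in ms) per hour
SELECT
    toStartOfHour(event_time) AS hour,
    quantiles(0.5, 0.9, 0.99)(query_duration_ms) AS latency_ms
FROM system.query_log
WHERE type = 'QueryFinish'
GROUP BY hour
ORDER BY hour
```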
Recent queries {#recent-queries}
Beneath the top-level metrics, a table displays query log entries (grouped by normalized query hash and user) over the selected time window:
Recent queries can be filtered and sorted by any available field. The table can also be configured to display or hide additional fields such as tables, p90, and p99 latencies.
Query drill-down {#query-drill-down}
Selecting a query from the recent queries table will open a flyout containing metrics and information specific to the selected query:
As we can see from the flyout, this particular query has been run more than 3000 times in the last 24 hours. All metrics in the
Query info
tab are aggregated metrics, but we can also view metrics from individual runs by selecting the
Query history
tab:
From this pane, the
Settings
and
Profile Events
items for each query run can be expanded to reveal additional information. | {"source_file": "02_query-insights.md"} | [
20e17bd1-b416-4127-9991-4d789d37715e | title: 'Deployment Options'
slug: /infrastructure/deployment-options
description: 'Deployment options available for ClickHouse customers'
keywords: ['bring your own cloud', 'byoc', 'private', 'government', 'self-deployed']
doc_type: 'reference'
ClickHouse Deployment Options
ClickHouse provides a range of deployment options to cater to diverse customer requirements, offering varying degrees of control, compliance, and operational overhead.
This document outlines the distinct deployment types available, enabling users to select the optimal solution that aligns with their specific architectural preferences, regulatory obligations, and resource management strategies.
ClickHouse Cloud {#clickhouse-cloud}
ClickHouse Cloud is a fully managed, cloud-native service that delivers the power and speed of ClickHouse without the operational complexities of self-management.
This option is ideal for users who prioritize rapid deployment, scalability, and minimal administrative overhead.
ClickHouse Cloud handles all aspects of infrastructure provisioning, scaling, maintenance, and updates, allowing users to focus entirely on data analysis and application development.
It offers consumption-based pricing and automatic scaling, ensuring reliable and cost-effective performance for analytical workloads. It is available across AWS, GCP, and Azure, with direct marketplace billing options.
Learn more about
ClickHouse Cloud
.
Bring Your Own Cloud {#byoc}
ClickHouse Bring Your Own Cloud (BYOC) allows organizations to deploy and manage ClickHouse within their own cloud environment while leveraging a managed service layer. This option bridges the gap between the fully managed experience of ClickHouse Cloud and the complete control of self-managed deployments. With ClickHouse BYOC, users retain control over their data, infrastructure, and security policies, meeting specific compliance and regulatory requirements, while offloading operational tasks like patching, monitoring, and scaling to ClickHouse. This model offers the flexibility of a private cloud deployment with the benefits of a managed service, making it suitable for large-scale deployments at enterprises with stringent security, governance, and data residency needs.
Learn more about
Bring Your Own Cloud
.
ClickHouse Private {#clickhouse-private}
ClickHouse Private is a self-deployed version of ClickHouse, leveraging the same proprietary technology that powers ClickHouse Cloud. This option delivers the highest degree of control, making it ideal for organizations with stringent compliance, networking, and security requirements, as well as for teams that possess the operational expertise to manage their own infrastructure. It benefits from regular updates and upgrades that are thoroughly tested in the ClickHouse Cloud environment, a feature-rich roadmap, and is backed by our expert support team.
Learn more about
ClickHouse Private
.
ClickHouse Government {#clickhouse-government} | {"source_file": "deployment-options.md"} | [
dbcc8bdd-7ebc-4ad9-aaa5-9c4d5106eddf |
ClickHouse Government is a self-deployed version of ClickHouse designed to meet the unique and rigorous demands of government agencies and public sector organizations that need isolated and accredited environments. This deployment option provides a highly secure, compliant, and isolated environment, focusing on FIPS 140-3 compliance utilizing OpenSSL, additional system hardening, and vulnerability management. It leverages the robust capabilities of ClickHouse Cloud while integrating specialized features and configurations to address the specific operational and security requirements of governmental entities. With ClickHouse Government, agencies can achieve high-performance analytics on sensitive data within a controlled and accredited infrastructure, backed by expert support tailored to public sector needs.
Learn more about
ClickHouse Government
. | {"source_file": "deployment-options.md"} | [
1ae8f9c4-c670-4bcd-803c-cd0ffb31a74a | title: 'Warehouses'
slug: /cloud/reference/warehouses
keywords: ['compute separation', 'cloud', 'architecture', 'compute-compute', 'warehouse', 'warehouses', 'hydra']
description: 'Compute-compute separation in ClickHouse Cloud'
doc_type: 'reference'
import compute_1 from '@site/static/images/cloud/reference/compute-compute-1.png';
import compute_2 from '@site/static/images/cloud/reference/compute-compute-2.png';
import compute_3 from '@site/static/images/cloud/reference/compute-compute-3.png';
import compute_4 from '@site/static/images/cloud/reference/compute-compute-4.png';
import compute_5 from '@site/static/images/cloud/reference/compute-compute-5.png';
import compute_7 from '@site/static/images/cloud/reference/compute-compute-7.png';
import compute_8 from '@site/static/images/cloud/reference/compute-compute-8.png';
import Image from '@theme/IdealImage';
Warehouses
What is compute-compute separation? {#what-is-compute-compute-separation}
Compute-compute separation is available for Scale and Enterprise tiers.
Each ClickHouse Cloud service includes:
- A group of two or more ClickHouse nodes (or replicas) is required, though child services can run a single replica.
- An endpoint (or multiple endpoints created via ClickHouse Cloud UI console), which is a service URL that you use to connect to the service (for example,
https://dv2fzne24g.us-east-1.aws.clickhouse.cloud:8443
).
- An object storage folder where the service stores all the data and part of the metadata:
:::note
Child single services can scale vertically unlike single parent services.
:::
Fig. 1 - current service in ClickHouse Cloud
Compute-compute separation allows users to create multiple compute node groups, each with its own endpoint, that use the same object storage folder, and thus work with the same tables, views, etc.
Each compute node group will have its own endpoint so you can choose which set of replicas to use for your workloads. Some of your workloads may be satisfied with only one small-size replica, and others may require full high-availability (HA) and hundreds of gigs of memory. Compute-compute separation also allows you to separate read operations from write operations so they don't interfere with each other:
Fig. 2 - compute separation in ClickHouse Cloud
It is possible to create extra services that share the same data with your existing services, or create a completely new setup with multiple services sharing the same data.
What is a warehouse? {#what-is-a-warehouse}
In ClickHouse Cloud, a
warehouse
is a set of services that share the same data.
Each warehouse has a primary service (this service was created first) and secondary service(s). For example, in the screenshot below you can see a warehouse "DWH Prod" with two services:
Primary service
DWH Prod
Secondary service
DWH Prod Subservice
Fig. 3 - Warehouse example
All services in a warehouse share the same:
Region (for example, us-east1) | {"source_file": "warehouses.md"} | [
510c4cdc-8a46-45ba-80f6-d170f1099e10 |
All services in a warehouse share the same:
Region (for example, us-east1)
Cloud service provider (AWS, GCP or Azure)
ClickHouse database version
You can sort services by the warehouse that they belong to.
Access controls {#access-controls}
Database credentials {#database-credentials}
Because all services in a warehouse share the same set of tables, they also share access controls across those services. This means that all database users that are created in Service 1 will also be able to use Service 2 with the same permissions (grants for tables, views, etc.), and vice versa. Users will use a different endpoint for each service but the same username and password. In other words,
users are shared across services that work with the same storage:
Fig. 4 - User Alice was created in Service 1, but she can use the same credentials to access all services that share the same data
Network access control {#network-access-control}
It is often useful to restrict specific services from being used by other applications or ad-hoc users. This can be done by using network restrictions, similar to how it is configured currently for regular services (navigate to
Settings
in the service tab in the specific service in ClickHouse Cloud console).
You can apply IP filtering settings to each service separately, which means you can control which application can access which service. This allows you to restrict users from using specific services:
Fig. 5 - Alice is restricted to access Service 2 because of the network settings
Read vs read-write {#read-vs-read-write}
Sometimes it is useful to restrict write access to a specific service and allow writes only by a subset of services in a warehouse. This can be done when creating the second and nth services (the first service should always be read-write):
Fig. 6 - Read-write and Read-only services in a warehouse
:::note
1. Read-only services currently allow user management operations (create, drop, etc). This behavior may be changed in the future.
2. Currently, refreshable materialized views are executed on all services in the warehouse, including read-only services. This behavior will be changed in the future, however, and they will be executed on RW services only.
:::
Scaling {#scaling}
Each service in a warehouse can be adjusted to your workload in terms of:
- Number of nodes (replicas). The primary service (the service that was created first in the warehouse) should have 2 or more nodes. Each secondary service can have 1 or more nodes.
- Size of nodes (replicas)
- If the service should scale automatically
- If the service should be idled on inactivity (cannot be applied to the first service in the group - please see the
Limitations
section)
Changes in behavior {#changes-in-behavior} | {"source_file": "warehouses.md"} | [
8ed0c520-7f6c-4bbd-abe7-63e9d9e4537b | Changes in behavior {#changes-in-behavior}
Once compute-compute is enabled for a service (at least one secondary service was created), the
clusterAllReplicas()
function call with the
default
cluster name will utilize only replicas from the service where it was called. That means, if there are two services connected to the same dataset, and
clusterAllReplicas(default, system, processes)
is called from service 1, only processes running on service 1 will be shown. If needed, you can still call
clusterAllReplicas('all_groups.default', system, processes)
for example to reach all replicas.
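The two behaviors can be placed side by side; both calls below are taken directly from the text above, with `system.processes` as the inspected table:

```sql
-- Only replicas of the service the query was issued on:
SELECT * FROM clusterAllReplicas(default, system, processes);

-- All replicas across every service in the warehouse:
SELECT * FROM clusterAllReplicas('all_groups.default', system, processes);
```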
Limitations {#limitations}
Primary service should always be up and should not be idled (limitation will be removed some time after GA).
During the private preview and some time after GA, the primary service (usually the existing service that you want to extend by adding other services) will be always up and will have the idling setting disabled. You will not be able to stop or idle the primary service if there is at least one secondary service. Once all secondary services are removed, you can stop or idle the original service again.
Sometimes workloads cannot be isolated.
Though the goal is to give you an option to isolate database workloads from each other, there can be corner cases where one workload in one service will affect another service sharing the same data. These are quite rare situations that are mostly connected to OLTP-like workloads.
All read-write services perform background merge operations.
When inserting data to ClickHouse, the database at first inserts the data to some staging partitions, and then performs merges in the background. These merges can consume memory and CPU resources. When two read-write services share the same storage, they both are performing background operations. That means that there can be a situation where there is an
INSERT
query in Service 1, but the merge operation is completed by Service 2. Note that read-only services do not execute background merges, thus they don't spend their resources on this operation.
All read-write services perform S3Queue table engine insert operations.
When creating an S3Queue table on a RW service, all other RW services in the warehouse may read data from S3 and write data to the database.
Inserts in one read-write service can prevent another read-write service from idling if idling is enabled.
As a result, a second service performs background merge operations for the first service. These background operations can prevent the second service from going to sleep when idling. Once the background operations are finished, the service will be idled. Read-only services are not affected and will be idled without delay. | {"source_file": "warehouses.md"} | [
d9e5473d-2953-40a3-a5ed-7751e46217ea | CREATE/RENAME/DROP DATABASE queries could be blocked by idled/stopped services by default.
These queries can hang. To bypass this, you can run database management queries with
settings distributed_ddl_task_timeout=0
at the session or per query level. For example:
```sql
CREATE DATABASE db_test_ddl_single_query_setting
SETTINGS distributed_ddl_task_timeout=0
```
Currently there is a soft limit of 5 services per warehouse.
Contact the support team if you need more than 5 services in a single warehouse.
Pricing {#pricing}
Compute prices are the same for all services in a warehouse (primary and secondary). Storage is billed only once - it is included in the first (original) service.
Please refer to the pricing calculator on the
pricing
page, which will help estimate the cost based on your workload size and tier selection.
Backups {#backups}
As all services in a single warehouse share the same storage, backups are made only on the primary (initial) service. In this way, the data for all services in a warehouse is backed up.
If you restore a backup from a primary service of a warehouse, it will be restored to a completely new service, not connected to the existing warehouse. You can then add more services to the new service immediately after the restore is finished.
Using warehouses {#using-warehouses}
Creating a warehouse {#creating-a-warehouse}
To create a warehouse, you need to create a second service that will share the data with an existing service. This can be done by clicking the plus sign on any of the existing services:
Fig. 7 - Click the plus sign to create a new service in a warehouse
On the service creation screen, the original service will be selected in the dropdown as the source for the data of the new service. Once created, these two services will form a warehouse.
Renaming a warehouse {#renaming-a-warehouse}
There are two ways to rename a warehouse:
You can select "Sort by warehouse" on the services page in the top right corner, and then click the pencil icon near the warehouse name
You can click the warehouse name on any of the services and rename the warehouse there
Deleting a warehouse {#deleting-a-warehouse}
Deleting a warehouse means deleting all the compute services and the data (tables, views, users, etc.). This action cannot be undone.
You can only delete a warehouse by deleting the first service created. To do this:
Delete all the services that were created in addition to the service that was created first;
Delete the first service (warning: all warehouse's data will be deleted on this step). | {"source_file": "warehouses.md"} | [
097c069f-26da-47fb-a0d9-16437a5138b2 | slug: /cloud/reference/shared-merge-tree
sidebar_label: 'SharedMergeTree'
title: 'SharedMergeTree'
keywords: ['SharedMergeTree']
description: 'Describes the SharedMergeTree table engine'
doc_type: 'reference'
import shared_merge_tree from '@site/static/images/cloud/reference/shared-merge-tree-1.png';
import shared_merge_tree_2 from '@site/static/images/cloud/reference/shared-merge-tree-2.png';
import Image from '@theme/IdealImage';
SharedMergeTree table engine
The SharedMergeTree table engine family is a cloud-native replacement for the ReplicatedMergeTree engines that is optimized to work on top of shared storage (e.g. Amazon S3, Google Cloud Storage, MinIO, Azure Blob Storage). There is a SharedMergeTree analog for every specific MergeTree engine type; for example, SharedReplacingMergeTree replaces ReplacingReplicatedMergeTree.
The SharedMergeTree table engine family powers ClickHouse Cloud. For an end-user, nothing needs to be changed to start using the SharedMergeTree engine family instead of the ReplicatedMergeTree based engines. It provides the following additional benefits:
Higher insert throughput
Improved throughput of background merges
Improved throughput of mutations
Faster scale-up and scale-down operations
More lightweight strong consistency for select queries
A significant improvement that the SharedMergeTree brings is that it provides a deeper separation of compute and storage compared to the ReplicatedMergeTree. You can see below how the ReplicatedMergeTree separates compute and storage:
As you can see, even though the data stored in the ReplicatedMergeTree is in object storage, the metadata still resides on each of the clickhouse-servers. This means that for every replicated operation, the metadata also needs to be replicated on all replicas.
Unlike ReplicatedMergeTree, SharedMergeTree doesn't require replicas to communicate with each other. Instead, all communication happens through shared storage and clickhouse-keeper. SharedMergeTree implements asynchronous leaderless replication and uses clickhouse-keeper for coordination and metadata storage. This means that metadata doesn't need to be replicated as your service scales up and down. This leads to faster replication, mutation, merges and scale-up operations. SharedMergeTree allows for hundreds of replicas for each table, making it possible to dynamically scale without shards. A distributed query execution approach is used in ClickHouse Cloud to utilize more compute resources for a query.
Introspection {#introspection}
Most of the system tables used for introspection of ReplicatedMergeTree exist for SharedMergeTree, except for
system.replication_queue
and
system.replicated_fetches
because no replication of data or metadata occurs. However, SharedMergeTree has corresponding alternatives for these two tables.
system.virtual_parts | {"source_file": "shared-merge-tree.md"} | [
-0.03255810961127281,
-0.0034189035650342703,
0.003959139809012413,
0.02116331458091736,
0.054445382207632065,
-0.016146667301654816,
-0.015867220237851143,
-0.037634965032339096,
-0.009018892422318459,
0.04776928946375847,
0.06007407605648041,
-0.01548149436712265,
0.06539475172758102,
-0... |
7b161a88-9afc-4b56-9eab-c9494a194115 | system.virtual_parts
This table serves as the alternative to
system.replication_queue
for SharedMergeTree. It stores information about the most recent set of current parts, as well as future parts in progress such as merges, mutations, and dropped partitions.
system.shared_merge_tree_fetches
This table is the alternative to
system.replicated_fetches
for SharedMergeTree. It contains information about current in-progress fetches of primary keys and checksums into memory.
Enabling SharedMergeTree {#enabling-sharedmergetree}
SharedMergeTree
is enabled by default.
For services that support the SharedMergeTree table engine, you don't need to enable anything manually. You can create tables the same way as you did before, and they will automatically use a SharedMergeTree-based table engine corresponding to the engine specified in your CREATE TABLE query.
sql
CREATE TABLE my_table(
key UInt64,
value String
)
ENGINE = MergeTree
ORDER BY key
This will create the table
my_table
using the SharedMergeTree table engine.
You don't need to specify
ENGINE=MergeTree
because
default_table_engine=MergeTree
is set in ClickHouse Cloud. The following query is identical to the query above.
sql
CREATE TABLE my_table(
key UInt64,
value String
)
ORDER BY key
If you use Replacing, Collapsing, Aggregating, Summing, VersionedCollapsing, or Graphite MergeTree tables, they will be automatically converted to the corresponding SharedMergeTree-based table engine.
sql
CREATE TABLE myFirstReplacingMT
(
`key` Int64,
`someCol` String,
`eventTime` DateTime
)
ENGINE = ReplacingMergeTree
ORDER BY key;
For a given table, you can check which table engine was used by inspecting its
CREATE TABLE
statement with
SHOW CREATE TABLE
:
sql
SHOW CREATE TABLE myFirstReplacingMT;
sql
CREATE TABLE default.myFirstReplacingMT
( `key` Int64, `someCol` String, `eventTime` DateTime )
ENGINE = SharedReplacingMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
ORDER BY key
Settings {#settings}
The behavior of some settings is significantly changed:
insert_quorum
-- all inserts to SharedMergeTree are quorum inserts (written to shared storage) so this setting is not needed when using SharedMergeTree table engine.
insert_quorum_parallel
-- all inserts to SharedMergeTree are quorum inserts (written to shared storage) so this setting is not needed when using SharedMergeTree table engine.
select_sequential_consistency
-- quorum inserts are not required; this setting triggers additional load on clickhouse-keeper for
SELECT
queries
Consistency {#consistency} | {"source_file": "shared-merge-tree.md"} | [
-0.02668267861008644,
-0.05220554396510124,
-0.06947240233421326,
-0.0027614326681941748,
-0.0671585202217102,
-0.0361587218940258,
0.017310237511992455,
-0.038796279579401016,
-0.04164644330739975,
0.07776837050914764,
0.042494386434555054,
0.006692563183605671,
0.043871741741895676,
-0.0... |
b122e1f0-9895-47e1-8f9f-47bfe95c8b7d | select_sequential_consistency
-- quorum inserts are not required; this setting triggers additional load on clickhouse-keeper for
SELECT
queries
Consistency {#consistency}
SharedMergeTree provides better lightweight consistency than ReplicatedMergeTree. When inserting into SharedMergeTree, you don't need to provide settings such as
insert_quorum
or
insert_quorum_parallel
. Inserts are quorum inserts, meaning that the metadata is stored in ClickHouse-Keeper and replicated to at least a quorum of ClickHouse-Keeper nodes. Each replica in your cluster will asynchronously fetch new information from ClickHouse-Keeper.
Most of the time, you should not be using
select_sequential_consistency
or
SYSTEM SYNC REPLICA LIGHTWEIGHT
. The asynchronous replication should cover most scenarios and has very low latency. In the rare case where you absolutely need to prevent stale reads, follow these recommendations in order of preference:
If you are executing your queries in the same session or the same node for your reads and writes, using
select_sequential_consistency
is not needed because your replica will already have the most recent metadata.
If you write to one replica and read from another, you can use
SYSTEM SYNC REPLICA LIGHTWEIGHT
to force the replica to fetch the metadata from ClickHouse-Keeper.
Use
select_sequential_consistency
as a setting in your query.
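The last two recommendations can be sketched as queries. This is a minimal illustration, not from this page: the table name `events` is a placeholder, and we assume you write on one replica and read on another.

```sql
-- Option 2: force the reading replica to fetch the latest metadata
-- from ClickHouse-Keeper before reading
SYSTEM SYNC REPLICA events LIGHTWEIGHT;
SELECT count() FROM events;

-- Option 3: request sequential consistency for a single query only
SELECT count() FROM events
SETTINGS select_sequential_consistency = 1;
```

Note that in `SYSTEM SYNC REPLICA`, the `LIGHTWEIGHT` modifier follows the table name.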
-0.055995408445596695,
-0.08604591339826584,
-0.030451327562332153,
0.04115522280335426,
-0.08792714029550552,
-0.03377087414264679,
-0.0720083937048912,
-0.08033054322004318,
-0.001021334552206099,
0.012299791909754276,
0.028927689418196678,
-0.0006329899188131094,
0.030867714434862137,
-... |
df742fdb-bf56-4218-8db5-477a607efc3c | slug: /cloud/reference/shared-catalog
sidebar_label: 'Shared catalog'
title: 'Shared catalog and shared database engine'
keywords: ['SharedCatalog', 'SharedDatabaseEngine']
description: 'Describes the Shared Catalog component and the Shared database engine in ClickHouse Cloud'
doc_type: 'reference'
Shared catalog and shared database engine {#shared-catalog-and-shared-database-engine}
Available exclusively in ClickHouse Cloud (and first party partner cloud services)
Shared Catalog is a cloud-native component responsible for replicating metadata and DDL operations of databases and tables that use stateless engines across replicas in ClickHouse Cloud. It enables consistent and centralized state management for these objects, ensuring metadata consistency even in dynamic or partially offline environments.
Shared Catalog does
not replicate tables themselves
, but ensures that all replicas have a consistent view of the database and table definitions by replicating DDL queries and metadata.
It supports replication of the following database engines:
Shared
PostgreSQL
MySQL
DataLakeCatalog
Architecture and metadata storage {#architecture-and-metadata-storage}
All metadata and DDL query history in Shared Catalog is stored centrally in ZooKeeper. Nothing is persisted on local disk. This architecture ensures:
Consistent state across all replicas
Statelessness of compute nodes
Fast, reliable replica bootstrapping
Shared database engine {#shared-database-engine}
The
Shared database engine
works in conjunction with Shared Catalog to manage databases whose tables use
stateless table engines
such as
SharedMergeTree
. These table engines do not write persistent state to disk and are compatible with dynamic compute environments.
The Shared database engine builds on and improves the behavior of the Replicated database engine while offering additional guarantees and operational benefits.
Key benefits {#key-benefits}
Atomic CREATE TABLE ... AS SELECT
Table creation and data insertion are executed atomically: either the entire operation completes, or the table is not created at all.
RENAME TABLE between databases
Enables atomic movement of tables across databases:
sql
RENAME TABLE db1.table TO db2.table;
Automatic table recovery with UNDROP TABLE
Dropped tables are retained for a default period of 8 hours and can be restored:
sql
UNDROP TABLE my_table;
The retention window is configurable via server settings.
Improved compute-compute separation
Unlike the Replicated database engine, which requires all replicas to be online to process a DROP query, Shared Catalog performs centralized metadata deletion. This allows operations to succeed even when some replicas are offline.
Automatic metadata replication | {"source_file": "shared-catalog.md"} | [
-0.0395943745970726,
-0.08259095251560211,
-0.028349177911877632,
-0.021928193047642708,
-0.014232003130018711,
-0.006757979281246662,
-0.04158299043774605,
-0.10017310082912445,
-0.007955122739076614,
-0.01770016737282276,
-0.005249656271189451,
0.012827692553400993,
0.03147759288549423,
... |
8f09d469-8186-440d-88d5-b740a96bbde0 | Automatic metadata replication
Shared Catalog ensures that database definitions are automatically replicated to all servers on startup. Operators do not need to manually configure or synchronize metadata on new instances.
Centralized, versioned metadata state
Shared Catalog stores a single source of truth in ZooKeeper. When a replica starts, it fetches the latest state and applies the diff to reach consistency. During query execution, the system can wait for other replicas to reach at least the required version of metadata to ensure correctness.
Usage in ClickHouse Cloud {#usage-in-clickhouse-cloud}
For end users, using Shared Catalog and the Shared database engine requires no additional configuration. Database creation is the same as always:
sql
CREATE DATABASE my_database;
ClickHouse Cloud automatically assigns the Shared database engine to databases. Any tables created within such a database using stateless engines will automatically benefit from Shared Catalog's replication and coordination capabilities.
Summary {#summary}
Shared Catalog and the Shared database engine provide:
Reliable and automatic metadata replication for stateless engines
Stateless compute with no local metadata persistence
Atomic operations for complex DDL
Improved support for elastic, ephemeral, or partially offline compute environments
Seamless usage for ClickHouse Cloud users
These capabilities make Shared Catalog the foundation for scalable, cloud-native metadata management in ClickHouse Cloud. | {"source_file": "shared-catalog.md"} | [
-0.03997121751308441,
-0.07055246084928513,
-0.01719070039689541,
-0.0009809471666812897,
0.0012720711529254913,
-0.04445141181349754,
-0.01930627040565014,
-0.08774758875370026,
-0.002262985799461603,
0.028712859377264977,
-0.005634265020489693,
-0.0024802067782729864,
0.03890031948685646,
... |
6c2c2cdf-27d9-464d-b1cd-5a1e31bdde63 | title: 'Replica-aware routing'
slug: /manage/replica-aware-routing
description: 'How to use Replica-aware routing to increase cache re-use'
keywords: ['cloud', 'sticky endpoints', 'sticky', 'endpoints', 'sticky routing', 'routing', 'replica aware routing']
doc_type: 'guide'
import PrivatePreviewBadge from '@theme/badges/PrivatePreviewBadge';
Replica-aware routing
Replica-aware routing (also known as sticky sessions, sticky routing, or session affinity) utilizes
Envoy proxy's ring hash load balancing
. The main purpose of replica-aware routing is to increase the chance of cache reuse. It does not guarantee isolation.
When enabling replica-aware routing for a service, we allow a wildcard subdomain on top of the service hostname. For a service with the host name
abcxyz123.us-west-2.aws.clickhouse.cloud
, you can use any hostname which matches
*.sticky.abcxyz123.us-west-2.aws.clickhouse.cloud
to visit the service:
|Example hostnames|
|---|
|
aaa.sticky.abcxyz123.us-west-2.aws.clickhouse.cloud
|
|
000.sticky.abcxyz123.us-west-2.aws.clickhouse.cloud
|
|
clickhouse-is-the-best.sticky.abcxyz123.us-west-2.aws.clickhouse.cloud
|
When Envoy receives a hostname that matches such a pattern, it will compute the routing hash based on the hostname and find the corresponding ClickHouse server on the hash ring based on the computed hash. Assuming that there is no ongoing change to the service (e.g. server restarts, scale out/in), Envoy will always choose the same ClickHouse server to connect to.
Note the original hostname will still use
LEAST_CONNECTION
load balancing, which is the default routing algorithm.
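Concretely, a sticky hostname is just an arbitrary label prefixed to the service hostname, so clients can construct one mechanically. A minimal shell sketch, using the example service hostname from this page (the label reports-job is an arbitrary example; any label matching the wildcard works):

```shell
SERVICE_HOST="abcxyz123.us-west-2.aws.clickhouse.cloud"
SESSION="reports-job"   # same label => same hash => same replica (absent topology changes)
STICKY_HOST="${SESSION}.sticky.${SERVICE_HOST}"
echo "${STICKY_HOST}"
# A client would then connect to e.g. https://${STICKY_HOST}:8443
```

This prints `reports-job.sticky.abcxyz123.us-west-2.aws.clickhouse.cloud`, which Envoy hashes to pick a replica.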
Limitations of Replica-aware routing {#limitations-of-replica-aware-routing}
Replica-aware routing does not guarantee isolation {#replica-aware-routing-does-not-guarantee-isolation}
Any disruption to the service, e.g. server pod restarts (for any reason, such as a version upgrade, a crash, or vertical scaling), or servers being scaled out or in, will disrupt the routing hash ring. This will cause connections with the same hostname to land on a different server pod.
Replica-aware routing does not work out of the box with private link {#replica-aware-routing-does-not-work-out-of-the-box-with-private-link}
Customers need to manually add a DNS entry to make name resolution work for the new hostname pattern. If used incorrectly, this can cause an imbalance in server load.
Configuring replica-aware routing {#configuring-replica-aware-routing}
To enable Replica-aware routing, please contact
our support team
. | {"source_file": "replica-aware-routing.md"} | [
-0.06828923523426056,
-0.04136001318693161,
0.018674295395612717,
0.013333176262676716,
-0.009627683088183403,
-0.04664799943566322,
0.00014438992366194725,
-0.07989838719367981,
0.03935978561639786,
0.0585879385471344,
-0.05362946540117264,
0.08416502922773361,
0.069296695291996,
-0.02570... |
66ca0576-498d-459d-bc5b-748fb5c1f961 | slug: /integrations/prometheus
sidebar_label: 'Prometheus'
title: 'Prometheus'
description: 'Export ClickHouse metrics to Prometheus'
keywords: ['prometheus', 'grafana', 'monitoring', 'metrics', 'exporter']
doc_type: 'reference'
import prometheus_grafana_metrics_endpoint from '@site/static/images/integrations/prometheus-grafana-metrics-endpoint.png';
import prometheus_grafana_dropdown from '@site/static/images/integrations/prometheus-grafana-dropdown.png';
import prometheus_grafana_chart from '@site/static/images/integrations/prometheus-grafana-chart.png';
import prometheus_grafana_alloy from '@site/static/images/integrations/prometheus-grafana-alloy.png';
import prometheus_grafana_metrics_explorer from '@site/static/images/integrations/prometheus-grafana-metrics-explorer.png';
import prometheus_datadog from '@site/static/images/integrations/prometheus-datadog.png';
import Image from '@theme/IdealImage';
Prometheus Integration
This feature supports integrating with
Prometheus
to monitor ClickHouse Cloud services. Access to Prometheus metrics is exposed via the
ClickHouse Cloud API
endpoint that allows users to securely connect and export metrics into their Prometheus metrics collector. These metrics can be integrated with dashboards e.g., Grafana, Datadog for visualization.
To get started,
generate an API key
.
Prometheus endpoint API to retrieve ClickHouse Cloud metrics {#prometheus-endpoint-api-to-retrieve-clickhouse-cloud-metrics}
API reference {#api-reference}
| Method | Path | Description |
| ------ | ------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------ |
| GET |
https://api.clickhouse.cloud/v1/organizations/:organizationId/services/:serviceId/prometheus?filtered_metrics=[true \| false]
| Returns metrics for a specific service |
| GET |
https://api.clickhouse.cloud/v1/organizations/:organizationId/prometheus?filtered_metrics=[true \| false]
| Returns metrics for all services in an organization |
Request Parameters
| Name | Location | Type |
| ---------------- | ------------------ |------------------ |
| Organization ID | Endpoint address | uuid |
| Service ID | Endpoint address | uuid (optional) |
| filtered_metrics | Query param | boolean (optional) |
Authentication {#authentication}
Use your ClickHouse Cloud API key for basic authentication:
```bash
Username:
Password:
Example request
export KEY_SECRET=
export KEY_ID=
export ORG_ID=
# For all services in $ORG_ID
curl --silent --user $KEY_ID:$KEY_SECRET https://api.clickhouse.cloud/v1/organizations/$ORG_ID/prometheus?filtered_metrics=true
# For a single service only
-0.10777769237756729,
0.03623897209763527,
-0.018290404230356216,
0.03155960142612457,
0.004131901077926159,
-0.08345754444599152,
0.023104898631572723,
0.02158350683748722,
-0.05479408800601959,
-0.01844603940844536,
0.018841950222849846,
-0.10758549720048904,
0.02574329823255539,
-0.0152... |
f3245326-e63a-4af8-b02e-5020d48585de | For all services in $ORG_ID
curl --silent --user $KEY_ID:$KEY_SECRET https://api.clickhouse.cloud/v1/organizations/$ORG_ID/prometheus?filtered_metrics=true
# For a single service only
export SERVICE_ID=
curl --silent --user $KEY_ID:$KEY_SECRET https://api.clickhouse.cloud/v1/organizations/$ORG_ID/services/$SERVICE_ID/prometheus?filtered_metrics=true
```
Sample response {#sample-response}
```response
# HELP ClickHouse_ServiceInfo Information about service, including cluster status and ClickHouse version
# TYPE ClickHouse_ServiceInfo untyped
ClickHouse_ServiceInfo{clickhouse_org="c2ba4799-a76e-456f-a71a-b021b1fafe60",clickhouse_service="12f4a114-9746-4a75-9ce5-161ec3a73c4c",clickhouse_service_name="test service",clickhouse_cluster_status="running",clickhouse_version="24.5",scrape="full"} 1
# HELP ClickHouseProfileEvents_Query Number of queries to be interpreted and potentially executed. Does not include queries that failed to parse or were rejected due to AST size limits, quota limits or limits on the number of simultaneously running queries. May include internal queries initiated by ClickHouse itself. Does not count subqueries.
# TYPE ClickHouseProfileEvents_Query counter
ClickHouseProfileEvents_Query{clickhouse_org="c2ba4799-a76e-456f-a71a-b021b1fafe60",clickhouse_service="12f4a114-9746-4a75-9ce5-161ec3a73c4c",clickhouse_service_name="test service",hostname="c-cream-ma-20-server-3vd2ehh-0",instance="c-cream-ma-20-server-3vd2ehh-0",table="system.events"} 6
# HELP ClickHouseProfileEvents_QueriesWithSubqueries Count queries with all subqueries
# TYPE ClickHouseProfileEvents_QueriesWithSubqueries counter
ClickHouseProfileEvents_QueriesWithSubqueries{clickhouse_org="c2ba4799-a76e-456f-a71a-b021b1fafe60",clickhouse_service="12f4a114-9746-4a75-9ce5-161ec3a73c4c",clickhouse_service_name="test service",hostname="c-cream-ma-20-server-3vd2ehh-0",instance="c-cream-ma-20-server-3vd2ehh-0",table="system.events"} 230
# HELP ClickHouseProfileEvents_SelectQueriesWithSubqueries Count SELECT queries with all subqueries
# TYPE ClickHouseProfileEvents_SelectQueriesWithSubqueries counter
ClickHouseProfileEvents_SelectQueriesWithSubqueries{clickhouse_org="c2ba4799-a76e-456f-a71a-b021b1fafe60",clickhouse_service="12f4a114-9746-4a75-9ce5-161ec3a73c4c",clickhouse_service_name="test service",hostname="c-cream-ma-20-server-3vd2ehh-0",instance="c-cream-ma-20-server-3vd2ehh-0",table="system.events"} 224
# HELP ClickHouseProfileEvents_FileOpen Number of files opened.
# TYPE ClickHouseProfileEvents_FileOpen counter
ClickHouseProfileEvents_FileOpen{clickhouse_org="c2ba4799-a76e-456f-a71a-b021b1fafe60",clickhouse_service="12f4a114-9746-4a75-9ce5-161ec3a73c4c",clickhouse_service_name="test service",hostname="c-cream-ma-20-server-3vd2ehh-0",instance="c-cream-ma-20-server-3vd2ehh-0",table="system.events"} 4157
# HELP ClickHouseProfileEvents_Seek Number of times the 'lseek' function was called.
# TYPE ClickHouseProfileEvents_Seek counter
-0.02776939421892166,
-0.0029130931943655014,
-0.07790620625019073,
0.010943424887955189,
0.016355032101273537,
-0.09956225007772446,
0.004716841503977776,
-0.12295287102460861,
0.04433988779783249,
0.0027156074065715075,
0.037171371281147,
-0.0926559790968895,
-0.001603132812306285,
-0.07... |
45ddf5e3-0e67-4076-a26e-b3614b1a5869 | HELP ClickHouseProfileEvents_Seek Number of times the 'lseek' function was called.
TYPE ClickHouseProfileEvents_Seek counter
ClickHouseProfileEvents_Seek{clickhouse_org="c2ba4799-a76e-456f-a71a-b021b1fafe60",clickhouse_service="12f4a114-9746-4a75-9ce5-161ec3a73c4c",clickhouse_service_name="test service",hostname="c-cream-ma-20-server-3vd2ehh-0",instance="c-cream-ma-20-server-3vd2ehh-0",table="system.events"} 1840
# HELP ClickPipes_Info Always equal to 1. Label "clickpipe_state" contains the current state of the pipe: Stopped/Provisioning/Running/Paused/Failed
# TYPE ClickPipes_Info gauge
ClickPipes_Info{clickhouse_org="11dfa1ec-767d-43cb-bfad-618ce2aaf959",clickhouse_service="82b83b6a-5568-4a82-aa78-fed9239db83f",clickhouse_service_name="ClickPipes demo instace",clickpipe_id="642bb967-940b-459e-9f63-a2833f62ec44",clickpipe_name="Confluent demo pipe",clickpipe_source="confluent",clickpipe_status="Running"} 1
# HELP ClickPipes_SentEvents_Total Total number of records sent to ClickHouse
# TYPE ClickPipes_SentEvents_Total counter
ClickPipes_SentEvents_Total{clickhouse_org="11dfa1ec-767d-43cb-bfad-618ce2aaf959",clickhouse_service="82b83b6a-5568-4a82-aa78-fed9239db83f",clickhouse_service_name="ClickPipes demo instace",clickpipe_id="642bb967-940b-459e-9f63-a2833f62ec44",clickpipe_name="Confluent demo pipe",clickpipe_source="confluent"} 5534250
# HELP ClickPipes_SentBytesCompressed_Total Total compressed bytes sent to ClickHouse.
# TYPE ClickPipes_SentBytesCompressed_Total counter
ClickPipes_SentBytesCompressed_Total{clickhouse_org="11dfa1ec-767d-43cb-bfad-618ce2aaf959",clickhouse_service="82b83b6a-5568-4a82-aa78-fed9239db83f",clickhouse_service_name="ClickPipes demo instace",clickpipe_id="642bb967-940b-459e-9f63-a2833f62ec44",clickpipe_name="Confluent demo pipe",clickpipe_source="confluent"} 380837520
# HELP ClickPipes_FetchedBytes_Total Total uncompressed bytes fetched from the source.
# TYPE ClickPipes_FetchedBytes_Total counter
ClickPipes_FetchedBytes_Total{clickhouse_org="11dfa1ec-767d-43cb-bfad-618ce2aaf959",clickhouse_service="82b83b6a-5568-4a82-aa78-fed9239db83f",clickhouse_service_name="ClickPipes demo instace",clickpipe_id="642bb967-940b-459e-9f63-a2833f62ec44",clickpipe_name="Confluent demo pipe",clickpipe_source="confluent"} 873286202
# HELP ClickPipes_Errors_Total Total errors ingesting data.
# TYPE ClickPipes_Errors_Total counter
ClickPipes_Errors_Total{clickhouse_org="11dfa1ec-767d-43cb-bfad-618ce2aaf959",clickhouse_service="82b83b6a-5568-4a82-aa78-fed9239db83f",clickhouse_service_name="ClickPipes demo instace",clickpipe_id="642bb967-940b-459e-9f63-a2833f62ec44",clickpipe_name="Confluent demo pipe",clickpipe_source="confluent"} 0
# HELP ClickPipes_SentBytes_Total Total uncompressed bytes sent to ClickHouse.
# TYPE ClickPipes_SentBytes_Total counter
-0.024121470749378204,
-0.030599048361182213,
-0.01794758252799511,
0.011117739602923393,
0.02296934463083744,
-0.027417197823524475,
0.08698927611112595,
0.025536008179187775,
-0.027482129633426666,
-0.02603748068213463,
0.005441515240818262,
-0.10563486814498901,
0.008728701621294022,
-0... |
347182ce-b794-4505-b92e-7505fd19223a | HELP ClickPipes_SentBytes_Total Total uncompressed bytes sent to ClickHouse.
TYPE ClickPipes_SentBytes_Total counter
ClickPipes_SentBytes_Total{clickhouse_org="11dfa1ec-767d-43cb-bfad-618ce2aaf959",clickhouse_service="82b83b6a-5568-4a82-aa78-fed9239db83f",clickhouse_service_name="ClickPipes demo instace",clickpipe_id="642bb967-940b-459e-9f63-a2833f62ec44",clickpipe_name="Confluent demo pipe",clickpipe_source="confluent"} 477187967
# HELP ClickPipes_FetchedBytesCompressed_Total Total compressed bytes fetched from the source. If data is uncompressed at the source, this will equal ClickPipes_FetchedBytes_Total
# TYPE ClickPipes_FetchedBytesCompressed_Total counter
ClickPipes_FetchedBytesCompressed_Total{clickhouse_org="11dfa1ec-767d-43cb-bfad-618ce2aaf959",clickhouse_service="82b83b6a-5568-4a82-aa78-fed9239db83f",clickhouse_service_name="ClickPipes demo instace",clickpipe_id="642bb967-940b-459e-9f63-a2833f62ec44",clickpipe_name="Confluent demo pipe",clickpipe_source="confluent"} 873286202
# HELP ClickPipes_FetchedEvents_Total Total number of records fetched from the source.
# TYPE ClickPipes_FetchedEvents_Total counter
ClickPipes_FetchedEvents_Total{clickhouse_org="11dfa1ec-767d-43cb-bfad-618ce2aaf959",clickhouse_service="82b83b6a-5568-4a82-aa78-fed9239db83f",clickhouse_service_name="ClickPipes demo instace",clickpipe_id="642bb967-940b-459e-9f63-a2833f62ec44",clickpipe_name="Confluent demo pipe",clickpipe_source="confluent"} 5535376
```
Metric labels {#metric-labels}
All metrics have the following labels:
|Label|Description|
|---|---|
|clickhouse_org|Organization ID|
|clickhouse_service|Service ID|
|clickhouse_service_name|Service name|
For ClickPipes, metrics will also have the following labels:
| Label | Description |
| --- | --- |
| clickpipe_id | ClickPipe ID |
| clickpipe_name | ClickPipe Name |
| clickpipe_source | ClickPipe Source Type |
Information metrics {#information-metrics}
ClickHouse Cloud provides a special metric
ClickHouse_ServiceInfo
which is a
gauge
that always has the value of
1
. This metric contains all the
Metric Labels
as well as the following labels:
|Label|Description|
|---|---|
|clickhouse_cluster_status|Status of the service. Could be one of the following: [
awaking
\|
running
\|
degraded
\|
idle
\|
stopped
]|
|clickhouse_version|Version of the ClickHouse server that the service is running|
|scrape|Indicates the status of the last scrape. Could be either
full
or
partial
|
|full|Indicates that there were no errors during the last metrics scrape|
|partial|Indicates that there were some errors during the last metrics scrape and only
ClickHouse_ServiceInfo
metric was returned.|
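As an illustration of consuming these labels, here is a small sketch that extracts the cluster status from a saved scrape. The file name metrics.txt and its single (shortened) sample line are assumptions modeled on the sample response above, not output from a real service:

```shell
# Hypothetical saved scrape: one ClickHouse_ServiceInfo line, shortened from the sample response
cat > metrics.txt <<'EOF'
ClickHouse_ServiceInfo{clickhouse_org="c2ba4799-a76e-456f-a71a-b021b1fafe60",clickhouse_cluster_status="running",clickhouse_version="24.5",scrape="full"} 1
EOF

# Pull out the clickhouse_cluster_status label
grep -o 'clickhouse_cluster_status="[^"]*"' metrics.txt
```

This prints `clickhouse_cluster_status="running"`; the same pattern works for `scrape` or `clickhouse_version`.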
Requests to retrieve metrics will not resume an idled service. In the case that a service is in the
idle
state, only the
ClickHouse_ServiceInfo
metric will be returned. | {"source_file": "prometheus.md"} | [
-0.0644269734621048,
-0.028852809220552444,
-0.06627968698740005,
0.015134279616177082,
-0.006814365740865469,
-0.11238498240709305,
0.09412303566932678,
-0.009983229450881481,
-0.024278704077005386,
0.03794129937887192,
-0.048667725175619125,
-0.0852232351899147,
0.05427438020706177,
-0.0... |
b853a32a-826c-4765-b30d-27ef68fa135a | Requests to retrieve metrics will not resume an idled service. In the case that a service is in the
idle
state, only the
ClickHouse_ServiceInfo
metric will be returned.
For ClickPipes, there's a similar
ClickPipes_Info
metric
gauge
that, in addition to the
Metric Labels
contains the following labels:
| Label | Description |
| --- | --- |
| clickpipe_state | The current state of the pipe |
Configuring Prometheus {#configuring-prometheus}
The Prometheus server collects metrics from configured targets at the given intervals. Below is an example configuration for the Prometheus server to use the ClickHouse Cloud Prometheus Endpoint:
```yaml
global:
scrape_interval: 15s
scrape_configs:
- job_name: "prometheus"
static_configs:
- targets: ["localhost:9090"]
- job_name: "clickhouse"
static_configs:
- targets: ["api.clickhouse.cloud"]
scheme: https
params:
filtered_metrics: ["true"]
metrics_path: "/v1/organizations/
/prometheus"
basic_auth:
username:
password:
honor_labels: true
```
Note the
honor_labels
configuration parameter needs to be set to
true
for the instance label to be properly populated. Additionally,
filtered_metrics
is set to
true
in the above example, but should be configured based on user preference.
Integrating with Grafana {#integrating-with-grafana}
Users have two primary ways to integrate with Grafana:
Metrics Endpoint
β This approach has the advantage of not requiring any additional components or infrastructure. This offering is limited to Grafana Cloud and only requires the ClickHouse Cloud Prometheus Endpoint URL and credentials.
Grafana Alloy
- Grafana Alloy is a vendor-neutral distribution of the OpenTelemetry (OTel) Collector, replacing the Grafana Agent. This can be used as a scraper, is deployable in your own infrastructure, and is compatible with any Prometheus endpoint.
We provide instructions on using these options below, focusing on the details specific to the ClickHouse Cloud Prometheus Endpoint.
Grafana Cloud with metrics endpoint {#grafana-cloud-with-metrics-endpoint}
Login to your Grafana Cloud account
Add a new connection by selecting the
Metrics Endpoint
Configure the Scrape URL to point to the Prometheus endpoint and use basic auth to configure your connection with the API key/secret
Test the connection to ensure you are able to connect
Once configured, you should see the metrics in the drop-down that you can select to configure dashboards:
Grafana Cloud with Alloy {#grafana-cloud-with-alloy}
If you are using Grafana Cloud, Alloy can be installed by navigating to the Alloy menu in Grafana and following the onscreen instructions: | {"source_file": "prometheus.md"} | [
-0.08403381705284119,
-0.017087414860725403,
-0.014719312079250813,
0.03094983845949173,
-0.010504110716283321,
-0.11354075372219086,
0.04577673599123955,
-0.07741914689540863,
0.01899845525622368,
-0.03860193118453026,
-0.012758973985910416,
-0.040261637419462204,
-0.024795474484562874,
0... |
f6dfb384-a3f3-4584-ac54-eff6833913d0 | Grafana Cloud with Alloy {#grafana-cloud-with-alloy}
If you are using Grafana Cloud, Alloy can be installed by navigating to the Alloy menu in Grafana and following the onscreen instructions:
This should configure Alloy with a
prometheus.remote_write
component for sending data to a Grafana Cloud endpoint with an authentication token. Users then only need to modify the Alloy config (found in
/etc/alloy/config.alloy
for Linux) to include a scraper for the ClickHouse Cloud Prometheus Endpoint.
The following shows an example configuration for Alloy with a
prometheus.scrape
component for scraping metrics from the ClickHouse Cloud Endpoint, as well as the automatically configured
prometheus.remote_write
component. Note that the
basic_auth
configuration component contains our Cloud API key ID and secret as the username and password, respectively.
```yaml
prometheus.scrape "clickhouse_cloud" {
  // Collect metrics from the default listen address.
  targets = [{
    __address__ = "https://api.clickhouse.cloud/v1/organizations/:organizationId/prometheus?filtered_metrics=true",
    // e.g. https://api.clickhouse.cloud/v1/organizations/97a33bdb-4db3-4067-b14f-ce40f621aae1/prometheus?filtered_metrics=true
  }]
  honor_labels = true
  basic_auth {
    username = "KEY_ID"
    password = "KEY_SECRET"
  }
  forward_to = [prometheus.remote_write.metrics_service.receiver]
  // forward to metrics_service below
}

prometheus.remote_write "metrics_service" {
  endpoint {
    url = "https://prometheus-prod-10-prod-us-central-0.grafana.net/api/prom/push"
    basic_auth {
      username = "<GRAFANA_CLOUD_USER>"
      password = "<GRAFANA_CLOUD_TOKEN>"
    }
  }
}
```
Note the
honor_labels
configuration parameter needs to be set to
true
for the instance label to be properly populated.
Grafana self-managed with Alloy {#grafana-self-managed-with-alloy}
Self-managed users of Grafana can find the instructions for installing the Alloy agent
here
. We assume users have configured Alloy to send Prometheus metrics to their desired destination. The
prometheus.scrape
component below causes Alloy to scrape the ClickHouse Cloud Endpoint. We assume
prometheus.remote_write
receives the scraped metrics. Adjust the
forward_to key
to the target destination if this does not exist.
```yaml
prometheus.scrape "clickhouse_cloud" {
  // Collect metrics from the default listen address.
  targets = [{
    __address__ = "https://api.clickhouse.cloud/v1/organizations/:organizationId/prometheus?filtered_metrics=true",
    // e.g. https://api.clickhouse.cloud/v1/organizations/97a33bdb-4db3-4067-b14f-ce40f621aae1/prometheus?filtered_metrics=true
  }]
  honor_labels = true
  basic_auth {
    username = "KEY_ID"
    password = "KEY_SECRET"
  }
  forward_to = [prometheus.remote_write.metrics_service.receiver]
  // forward to metrics_service. Modify to your preferred receiver
}
```
Once configured, you should see ClickHouse related metrics in your metrics explorer:
Note the
honor_labels
configuration parameter needs to be set to
true
for the instance label to be properly populated.
Integrating with Datadog {#integrating-with-datadog}
You can use the Datadog
Agent
and
OpenMetrics integration
to collect metrics from the ClickHouse Cloud endpoint. Below is a simple example configuration for this agent and integration. Please note though that you may want to select only those metrics that you care about the most. The catch-all example below will export many thousands of metric-instance combinations which Datadog will treat as custom metrics.
```yaml
init_config:

instances:
  - openmetrics_endpoint: 'https://api.clickhouse.cloud/v1/organizations/97a33bdb-4db3-4067-b14f-ce40f621aae1/prometheus?filtered_metrics=true'
    namespace: 'clickhouse'
    metrics:
      - '^ClickHouse.*'
    username: username
    password: password
```
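The `^ClickHouse.*` entry in `metrics` is a regular expression matched against metric names. Before settling on the catch-all, you can check offline which names a pattern would admit (a sketch; the candidate metric names below are illustrative, not an authoritative list):

```python
import re

# The catch-all pattern from the Datadog config above; matching is anchored at the start.
pattern = re.compile(r"^ClickHouse.*")

candidates = [
    "ClickHouseProfileEvents_Query",      # illustrative name
    "ClickHouseMetrics_MemoryTracking",   # illustrative name
    "process_cpu_seconds_total",          # not ClickHouse-prefixed
]
selected = [name for name in candidates if pattern.match(name)]
# Only the ClickHouse-prefixed names pass the filter.
```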
title: 'Notifications'
slug: /cloud/notifications
description: 'Notifications for your ClickHouse Cloud service'
keywords: ['cloud', 'notifications']
doc_type: 'guide'
import Image from '@theme/IdealImage';
import notifications_1 from '@site/static/images/cloud/manage/notifications-1.png';
import notifications_2 from '@site/static/images/cloud/manage/notifications-2.png';
import notifications_3 from '@site/static/images/cloud/manage/notifications-3.png';
import notifications_4 from '@site/static/images/cloud/manage/notifications-4.png';
ClickHouse Cloud sends notifications about critical events related to your service or organization. There are a few concepts to keep in mind to understand how notifications are sent and configured:
Notification category
: Refers to groups of notifications such as billing notifications, service related notifications etc. Within each category, there are multiple notifications for which the delivery mode can be configured.
Notification severity
: Notification severity can be
info
,
warning
, or
critical
depending on how important a notification is. This is not configurable.
Notification channel
: Channel refers to the mode by which the notification is received such as UI, email, Slack etc. This is configurable for most notifications.
Receiving notifications {#receiving-notifications}
Notifications can be received via various channels. For now, ClickHouse Cloud supports receiving notifications through email, ClickHouse Cloud UI, and Slack. You can click on the bell icon in the top left menu to view current notifications, which opens a flyout. Clicking the button
View All
at the bottom of the flyout will take you to a page that shows an activity log of all notifications.
Customizing notifications {#customizing-notifications}
For each notification, you can customize how you receive the notification. You can access the settings screen from the notifications flyout or from the second tab on the notifications activity log.
Cloud users can customize notifications delivered via the Cloud UI, and these customizations are reflected for each individual user. Cloud users can also customize notifications delivered to their own emails, but only users with admin permissions can customize notifications delivered to custom emails and notifications delivered to Slack channels.
To configure delivery for a specific notification, click on the pencil icon to modify the notification delivery channels.
:::note
Certain
required
notifications such as
Payment failed
are not configurable.
:::
Supported notifications {#supported-notifications}
Currently, we send out notifications related to billing (payment failure, usage exceeding a certain threshold, etc.) as well as notifications related to scaling events (scaling completed, scaling blocked, etc.).
description: 'Advanced dashboard in ClickHouse Cloud'
keywords: ['monitoring', 'observability', 'advanced dashboard', 'dashboard', 'observability
dashboard']
sidebar_label: 'Advanced dashboard'
sidebar_position: 45
slug: /cloud/manage/monitor/advanced-dashboard
title: 'Advanced dashboard in ClickHouse Cloud'
doc_type: 'guide'
import AdvancedDashboard from '@site/static/images/cloud/manage/monitoring/advanced_dashboard.png';
import NativeAdvancedDashboard from '@site/static/images/cloud/manage/monitoring/native_advanced_dashboard.png';
import EditVisualization from '@site/static/images/cloud/manage/monitoring/edit_visualization.png';
import InsertedRowsSec from '@site/static/images/cloud/manage/monitoring/inserted_rows_max_parts_for_partition.png';
import ResourceIntensiveQuery from '@site/static/images/cloud/manage/monitoring/resource_intensive_query.png';
import SelectedRowsPerSecond from '@site/static/images/cloud/manage/monitoring/selected_rows_sec.png';
import Image from '@theme/IdealImage';
Monitoring your database system in a production environment is vital to
understanding your deployment health so that you can prevent or solve outages.
The advanced dashboard is a lightweight tool designed to give you deep insights
into your ClickHouse system and its environment, helping you stay ahead of
performance bottlenecks, system failures, and inefficiencies.
The advanced dashboard is available in both ClickHouse OSS (Open Source Software)
and Cloud. In this article we will show you how to use the advanced dashboard in
Cloud.
Accessing the advanced dashboard {#accessing-the-advanced-dashboard}
The advanced dashboard can be accessed by navigating to:
Left side panel
Monitoring
→
Advanced dashboard
Accessing the native advanced dashboard {#accessing-the-native-advanced-dashboard}
The native advanced dashboard can be accessed by navigating to:
Left side panel
Monitoring
→
Advanced dashboard
Clicking
You can still access the native advanced dashboard.
This will open the native advanced dashboard in a new tab. You will need to
authenticate to access the dashboard.
Each visualization has a SQL query associated with it that populates it. You can
edit this query by clicking on the pen icon.
Out-of-box visualizations {#out-of-box-visualizations}
The default charts in the Advanced Dashboard are designed to provide real-time
visibility into your ClickHouse system. Below is a list with descriptions for
each chart. They are grouped into three categories to help you navigate them.
ClickHouse specific {#clickhouse-specific}
These metrics are tailored to monitor the health and performance of your
ClickHouse instance.
| Metric | Description |
|---------------------------|------------------------------------------------------------------------------------------|
| Queries Per Second | Tracks the rate of queries being processed |
| Selected Rows/Sec | Indicates the number of rows being read by queries |
| Inserted Rows/Sec | Measures the data ingestion rate |
| Total MergeTree Parts | Shows the number of active parts in MergeTree tables, helping identify unbatched inserts |
| Max Parts for Partition | Highlights the maximum number of parts in any partition |
| Queries Running | Displays the number of queries currently executing |
| Selected Bytes Per Second | Indicates the volume of data being read by queries |
System health specific {#system-health-specific}
Monitoring the underlying system is just as important as watching ClickHouse itself.
| Metric | Description |
|---------------------------|---------------------------------------------------------------------------|
| IO Wait | Tracks I/O wait times |
| CPU Wait | Measures delays caused by CPU resource contention |
| Read From Disk | Tracks the number of bytes read from disks or block devices |
| Read From Filesystem | Tracks the number of bytes read from the filesystem, including page cache |
| Memory (tracked, bytes) | Shows memory usage for processes tracked by ClickHouse |
| Load Average (15 minutes) | Reports the 15-minute load average of the system                          |
| OS CPU Usage (Userspace)  | CPU usage while running userspace code                                    |
| OS CPU Usage (Kernel)     | CPU usage while running kernel code                                       |
ClickHouse Cloud specific {#clickhouse-cloud-specific}
ClickHouse Cloud stores data using object storage (S3 type). Monitoring this
interface can help detect issues.
| Metric | Description |
|--------------------------------|-------------------------------------------------------------|
| S3 Read wait | Measures the latency of read requests to S3 |
| S3 read errors per second | Tracks the read errors rate |
| Read From S3 (bytes/sec) | Tracks the rate data is read from S3 storage |
| Disk S3 write req/sec | Monitors the frequency of write operations to S3 storage |
| Disk S3 read req/sec | Monitors the frequency of read operations to S3 storage |
| Page cache hit rate | The hit rate of the page cache |
| Filesystem cache hit rate | Hit rate of the filesystem cache |
| Filesystem cache size | The current size of the filesystem cache |
| Network send bytes/sec         | Tracks the current speed of outbound network traffic        |
| Network receive bytes/sec      | Tracks the current speed of incoming network traffic        |
| Concurrent network connections | Tracks the number of current concurrent network connections |
Identifying issues using the advanced dashboard {#identifying-issues-with-the-advanced-dashboard}
Having this real-time view of the health of your ClickHouse service greatly helps
you mitigate issues before they impact your business, or solve them faster when
they do. Below are a few issues you can spot using the advanced dashboard.
Unbatched inserts {#unbatched-inserts}
As described in the
best practices documentation
, it is recommended to always
bulk insert data into ClickHouse if able to do so synchronously.
A bulk insert with a reasonable batch size reduces the number of parts created
during ingestion, resulting in more efficient writes to disk and fewer merge
operations.
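Client-side, the batching recommendation boils down to grouping rows before handing them to the driver. A minimal sketch (the function name and default batch size are ours; tune `batch_size` to your row width and ingest rate):

```python
def batch_rows(rows, batch_size=100_000):
    """Yield fixed-size batches of rows, suitable for bulk INSERTs."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) >= batch_size:
            yield batch
            batch = []
    if batch:  # flush the final, possibly short, batch
        yield batch
```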
The key metrics for spotting sub-optimal inserts are
Inserted Rows/sec
and
Max Parts for Partition
The example above shows two spikes in
Inserted Rows/sec
and
Max Parts for Partition
between 13h and 14h. This indicates that we ingest data at a reasonable speed.
Then we see another big spike on
Max Parts for Partition
after 16h but a
very slow
Inserted Rows/sec speed
. A lot of parts are being created with
very little data generated, which indicates that the size of the parts is
sub-optimal.
Resource intensive query {#resource-intensive-query}
It is common to run SQL queries that consume a large amount of resources, such as
CPU or memory. However, it is important to monitor these queries and understand
their impact on your deployment's overall performance.
A sudden change in resource consumption without a change in query throughput can
indicate more expensive queries being executed. Depending on the type of queries
you are running this can be expected, but being able to spot them from the
advanced dashboard is useful.
Below is an example of CPU usage peaking without significantly changing the
number of queries per second executed.
Bad primary key design {#bad-primary-key-design}
Another issue you can spot using the advanced dashboard is a bad primary key design.
As described in
"A practical introduction to primary indexes in ClickHouse"
,
choosing a primary key that best fits your use case will greatly improve performance
by reducing the number of rows ClickHouse needs to read to execute a query.
One of the metrics you can follow to spot potential improvements in primary keys
is
Selected Rows per second
. A sudden peak in the number of selected rows can
indicate either a general increase in overall query throughput or queries that
read a large number of rows to produce their results.
Using the timestamp as a filter, you can find the queries executed at the time
of the peak in the table
system.query_log
.
For example, running a query that shows all the queries executed between 11:20 am
and 11:30 am on a certain day to understand which queries are reading too many rows:
```sql title="Query"
SELECT
    type,
    event_time,
    query_duration_ms,
    query,
    read_rows,
    tables
FROM system.query_log
WHERE has(databases, 'default') AND (event_time >= '2024-12-23 11:20:00') AND (event_time <= '2024-12-23 11:30:00') AND (type = 'QueryFinish')
ORDER BY query_duration_ms DESC
LIMIT 5
FORMAT VERTICAL
```
```response title="Response"
Row 1:
──────
type: QueryFinish
event_time: 2024-12-23 11:22:55
query_duration_ms: 37407
query: SELECT
toStartOfMonth(review_date) AS month,
any(product_title),
avg(star_rating) AS avg_stars
FROM amazon_reviews_no_pk
WHERE
product_category = 'Home'
GROUP BY
month,
product_id
ORDER BY
month DESC,
product_id ASC
LIMIT 20
read_rows: 150957260
tables: ['default.amazon_reviews_no_pk']
Row 2:
──────
type: QueryFinish
event_time: 2024-12-23 11:26:50
query_duration_ms: 7325
query: SELECT
toStartOfMonth(review_date) AS month,
any(product_title),
avg(star_rating) AS avg_stars
FROM amazon_reviews_no_pk
WHERE
product_category = 'Home'
GROUP BY
month,
product_id
ORDER BY
month DESC,
product_id ASC
LIMIT 20
read_rows: 150957260
tables: ['default.amazon_reviews_no_pk']
Row 3:
──────
type: QueryFinish
event_time: 2024-12-23 11:24:10
query_duration_ms: 3270
query: SELECT
toStartOfMonth(review_date) AS month,
any(product_title),
avg(star_rating) AS avg_stars
FROM amazon_reviews_pk
WHERE
product_category = 'Home'
GROUP BY
month,
product_id
ORDER BY
month DESC,
product_id ASC
LIMIT 20
read_rows: 6242304
tables: ['default.amazon_reviews_pk']
Row 4:
──────
type: QueryFinish
event_time: 2024-12-23 11:28:10
query_duration_ms: 2786
query: SELECT
toStartOfMonth(review_date) AS month,
any(product_title),
avg(star_rating) AS avg_stars
FROM amazon_reviews_pk
WHERE
product_category = 'Home'
GROUP BY
month,
product_id
ORDER BY
month DESC,
product_id ASC
LIMIT 20
read_rows: 6242304
tables: ['default.amazon_reviews_pk']
```
In this example, we can see the same query being executed against two
tables
amazon_reviews_no_pk
and
amazon_reviews_pk
. It can be concluded that
someone was testing a primary key option for the table
amazon_reviews
.
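The `read_rows` figures above quantify the gain from the better-fitting primary key — a quick back-of-the-envelope ratio (values copied from the query log output):

```python
rows_no_pk = 150_957_260  # read_rows against amazon_reviews_no_pk
rows_pk = 6_242_304       # read_rows against amazon_reviews_pk
improvement = rows_no_pk / rows_pk
# The table with the fitting primary key reads roughly 24x fewer rows.
```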
sidebar_label: 'Overview'
sidebar_position: 1
title: 'ClickHouse Cloud API'
slug: /cloud/manage/api/api-overview
description: 'Learn about ClickHouse Cloud API'
doc_type: 'reference'
keywords: ['ClickHouse Cloud', 'API overview', 'cloud API', 'REST API', 'programmatic access']
ClickHouse Cloud API
Overview {#overview}
The ClickHouse Cloud API is a REST API designed for developers to easily manage
organizations and services on ClickHouse Cloud. Using our Cloud API, you can
create and manage services, provision API keys, add or remove members in your
organization, and more.
Learn how to create your first API key and start using the ClickHouse Cloud API.
Swagger (OpenAPI) Endpoint and UI {#swagger-openapi-endpoint-and-ui}
The ClickHouse Cloud API is built on the open-source
OpenAPI specification
to allow for predictable client-side consumption. If you need to programmatically
consume the ClickHouse Cloud API docs, we offer a JSON-based Swagger endpoint
via https://api.clickhouse.cloud/v1. You can also find the API docs via
the
Swagger UI
.
:::note
If your organization has been migrated to one of the
new pricing plans
, and you use OpenAPI you will be required to remove the
tier
field in the service creation
POST
request.
The
tier
field has been removed from the service object as we no longer have service tiers.
This will affect the objects returned by the
POST
,
GET
, and
PATCH
service requests. Therefore, any code that consumes these APIs may need to be adjusted to handle these changes.
:::
Rate limits {#rate-limits}
Developers are limited to 100 API keys per organization. Each API key has a
limit of 10 requests over a 10-second window. If you'd like to increase the
number of API keys or requests per 10-second window for your organization,
please contact support@clickhouse.com.
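Given the 10-requests-per-10-seconds limit, API clients often add a client-side guard rather than relying on rejected requests. A sliding-window sketch (the class is ours, not part of any ClickHouse SDK):

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Client-side guard for a max_requests-per-window rate limit."""

    def __init__(self, max_requests=10, window_seconds=10.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.sent = deque()  # timestamps of requests inside the window

    def try_acquire(self, now=None):
        """Return True and record the request if the limit allows it."""
        now = time.monotonic() if now is None else now
        # Evict timestamps that have fallen out of the window.
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()
        if len(self.sent) < self.max_requests:
            self.sent.append(now)
            return True
        return False
```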
Terraform provider {#terraform-provider}
The official ClickHouse Terraform Provider lets you use
Infrastructure as Code
to create predictable, version-controlled configurations to make deployments much
less error-prone.
You can view the Terraform provider docs in the
Terraform registry
.
If you'd like to contribute to the ClickHouse Terraform Provider, you can view
the source
in the GitHub repo
.
:::note
If your organization has been migrated to one of the
new pricing plans
, you will be required to use our
ClickHouse Terraform provider
version 2.0.0 or above. This upgrade is required to handle changes in the
tier
attribute of the service since, after pricing migration, the
tier
field is no longer accepted and references to it should be removed.
You will now also be able to specify the
num_replicas
field as a property of the service resource.
:::
Terraform and OpenAPI New Pricing: Replica Settings Explained {#terraform-and-openapi-new-pricing---replica-settings-explained}
The number of replicas each service will be created with defaults to 3 for the Scale and Enterprise tiers, while it defaults to 1 for the Basic tier.
For the Scale and the Enterprise tiers it is possible to adjust it by passing a
numReplicas
field in the service creation request.
The value of the
numReplicas
field must be between 2 and 20 for the first service in a warehouse. Services that are created in an existing warehouse can have a number of replicas as low as 1.
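Those bounds are easy to check before issuing the request (a sketch; the helper is ours, and the upper bound for services in an existing warehouse is assumed to match the documented 20-replica maximum):

```python
def valid_num_replicas(num_replicas: int, first_service_in_warehouse: bool) -> bool:
    """Validate a numReplicas value against the documented Cloud API bounds."""
    if first_service_in_warehouse:
        # The first service in a warehouse must have 2 to 20 replicas.
        return 2 <= num_replicas <= 20
    # Later services in the same warehouse may go as low as 1 replica.
    return 1 <= num_replicas <= 20
```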
Support {#support}
We recommend visiting
our Slack channel
first to get quick support. If
you'd like additional help or more info about our API and its capabilities,
please contact ClickHouse Support at https://console.clickhouse.cloud/support
slug: /cloud/manage/postman
sidebar_label: 'Programmatic API access with Postman'
title: 'Programmatic API access with Postman'
description: 'This guide will help you test the ClickHouse Cloud API using Postman'
doc_type: 'guide'
keywords: ['api', 'postman', 'rest api', 'cloud management', 'integration']
import Image from '@theme/IdealImage';
import postman1 from '@site/static/images/cloud/manage/postman/postman1.png';
import postman2 from '@site/static/images/cloud/manage/postman/postman2.png';
import postman3 from '@site/static/images/cloud/manage/postman/postman3.png';
import postman4 from '@site/static/images/cloud/manage/postman/postman4.png';
import postman5 from '@site/static/images/cloud/manage/postman/postman5.png';
import postman6 from '@site/static/images/cloud/manage/postman/postman6.png';
import postman7 from '@site/static/images/cloud/manage/postman/postman7.png';
import postman8 from '@site/static/images/cloud/manage/postman/postman8.png';
import postman9 from '@site/static/images/cloud/manage/postman/postman9.png';
import postman10 from '@site/static/images/cloud/manage/postman/postman10.png';
import postman11 from '@site/static/images/cloud/manage/postman/postman11.png';
import postman12 from '@site/static/images/cloud/manage/postman/postman12.png';
import postman13 from '@site/static/images/cloud/manage/postman/postman13.png';
import postman14 from '@site/static/images/cloud/manage/postman/postman14.png';
import postman15 from '@site/static/images/cloud/manage/postman/postman15.png';
import postman16 from '@site/static/images/cloud/manage/postman/postman16.png';
import postman17 from '@site/static/images/cloud/manage/postman/postman17.png';
This guide will help you test the ClickHouse Cloud API using
Postman
.
The Postman Application is available for use within a web browser or can be downloaded to a desktop.
Create an account {#create-an-account}
Free accounts are available at
https://www.postman.com
.
Create a workspace {#create-a-workspace}
Name your workspace and set the visibility level.
Create a collection {#create-a-collection}
Below "Explore" on the top left Menu click "Import":
A modal will appear:
Enter the API address: "https://api.clickhouse.cloud/v1" and press 'Enter':
Select "Postman Collection" by clicking on the "Import" button:
Interface with the ClickHouse Cloud API spec {#interface-with-the-clickhouse-cloud-api-spec}
The "API spec for ClickHouse Cloud" will now appear within "Collections" (Left Navigation).
Click on "API spec for ClickHouse Cloud." From the middle pain select the 'Authorization' tab:
Set authorization {#set-authorization}
Toggle the dropdown menu to select "Basic Auth":
Enter the Username and Password received when you set up your ClickHouse Cloud API keys:
Enable variables {#enable-variables}
Variables
enable the storage and reuse of values in Postman allowing for easier API testing.
Set the organization ID and Service ID {#set-the-organization-id-and-service-id}
Within the "Collection", click the "Variable" tab in the middle pane (The Base URL will have been set by the earlier API import):
Below
baseURL
click the open field "Add new value", and substitute your organization ID and service ID:
Test the ClickHouse Cloud API functionalities {#test-the-clickhouse-cloud-api-functionalities}
Test "GET list of available organizations" {#test-get-list-of-available-organizations}
Under the "OpenAPI spec for ClickHouse Cloud", expand the folder > V1 > organizations
Click "GET list of available organizations" and press the blue "Send" button on the right:
The returned results should deliver your organization details with "status": 200. (If you receive a "status" 400 with no organization information your configuration is not correct).
Test "GET organizational details" {#test-get-organizational-details}
Under the
organizationid
folder, navigate to "GET organizational details":
In the middle frame menu under Params an
organizationid
is required.
Edit this value with
orgid
in curly braces
{{orgid}}
(From setting this value earlier a menu will appear with the value):
After pressing the "Save" button, press the blue "Send" button at the top right of the screen.
The returned results should deliver your organization details with "status": 200. (If you receive a "status" 400 with no organization information your configuration is not correct).
Test "GET service details" {#test-get-service-details}
Click "GET service details"
Edit the Values for
organizationid
and
serviceid
with
{{orgid}}
and
{{serviceid}}
respectively.
Press "Save" and then the blue "Send" button on the right.
The returned results should deliver a list of your services and their details with "status": 200. (If you receive a "status" 400 with no service(s) information your configuration is not correct).
title: 'Cloud API'
slug: /cloud/manage/cloud-api
description: 'Landing page for the Cloud API section'
doc_type: 'landing-page'
keywords: ['ClickHouse Cloud', 'cloud API', 'API documentation', 'REST API reference', 'cloud management API']
This section contains reference documentation for Cloud API and contains the following pages:
| Page              | Description                                                                                                                          |
|-------------------|--------------------------------------------------------------------------------------------------------------------------------------|
| Overview          | Provides an overview of rate limits, the Terraform Provider, the Swagger (OpenAPI) Endpoint and UI, and available support.            |
| Managing API Keys | Learn more about Cloud's API utilizing OpenAPI that allows you to programmatically manage your account and aspects of your services. |
| API Reference     | OpenAPI (swagger) reference page.                                                                                                    |
sidebar_label: 'Managing API keys'
slug: /cloud/manage/openapi
title: 'Managing API Keys'
description: 'ClickHouse Cloud provides an API utilizing OpenAPI that allows you to programmatically manage your account and aspects of your services.'
doc_type: 'guide'
keywords: ['api', 'openapi', 'rest api', 'documentation', 'cloud management']
import image_01 from '@site/static/images/cloud/manage/openapi1.png';
import image_02 from '@site/static/images/cloud/manage/openapi2.png';
import image_03 from '@site/static/images/cloud/manage/openapi3.png';
import image_04 from '@site/static/images/cloud/manage/openapi4.png';
import image_05 from '@site/static/images/cloud/manage/openapi5.png';
import Image from '@theme/IdealImage';
Managing API keys
ClickHouse Cloud provides an API utilizing OpenAPI that allows you to programmatically manage your account and aspects of your services.
:::note
This document covers the ClickHouse Cloud API. For database API endpoints, please see
Cloud Endpoints API
:::
You can use the
API Keys
tab on the left menu to create and manage your API keys.
The **API Keys** page will initially display a prompt to create your first API key as shown below. After your first key is created, you can create new keys using the **New API Key** button that appears in the top right corner.
To create an API key, specify the key name, permissions for the key, and expiration time, then click **Generate API Key**.
:::note
Permissions align with ClickHouse Cloud predefined roles. The developer role has read-only permissions for assigned services and the admin role has full read and write permissions.
:::
:::tip Query API Endpoints
To use API keys with Query API Endpoints, set Organization Role to **Member** (minimum) and grant Service Role access to **Query Endpoints**.
:::
The next screen will display your Key ID and Key secret. Copy these values and put them somewhere safe, such as a vault. The values will not be displayed after you leave this screen.
The ClickHouse Cloud API uses HTTP Basic Authentication to verify the validity of your API keys. Here is an example of how to use your API keys to send requests to the ClickHouse Cloud API using `curl`:
```bash
$ KEY_ID=mykeyid
$ KEY_SECRET=mykeysecret
$ curl --user $KEY_ID:$KEY_SECRET https://api.clickhouse.cloud/v1/organizations
```
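The same request can be made from any HTTP client. Below is a minimal Python sketch using only the standard library; the key values are placeholders, and the `/organizations` path is taken from the `curl` example above — treat the helper names as illustrative, not part of an official SDK:

```python
import base64
import json
import urllib.request

def basic_auth_header(key_id: str, key_secret: str) -> str:
    """Build the HTTP Basic Authorization header value from an API key pair."""
    token = base64.b64encode(f"{key_id}:{key_secret}".encode()).decode()
    return f"Basic {token}"

def cloud_api_get(path: str, key_id: str, key_secret: str) -> dict:
    """Send an authenticated GET request to the ClickHouse Cloud API."""
    req = urllib.request.Request(
        f"https://api.clickhouse.cloud/v1{path}",
        headers={"Authorization": basic_auth_header(key_id, key_secret)},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires a valid key pair):
# orgs = cloud_api_get("/organizations", "mykeyid", "mykeysecret")
```

The Key ID and Key secret shown on key creation map directly onto the two halves of the Basic credential.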
Returning to the **API Keys** page, you will see the key name, last four characters of the Key ID, permissions, status, expiration date, and creator. You are able to edit the key name, permissions, and expiration from this screen. Keys may also be disabled or deleted from this screen.
:::note
Deleting an API key is a permanent action. Any services using the key will immediately lose access to ClickHouse Cloud.
:::
Endpoints {#endpoints}
For details on endpoints, refer to the API reference.
Use your API Key and API Secret with the base URL `https://api.clickhouse.cloud/v1`.

{"source_file": "openapi.md"}
---
sidebar_position: 1
sidebar_label: 'Make Before Break (MBB)'
slug: /cloud/features/mbb
description: 'Page describing Make Before Break (MBB) operations in ClickHouse Cloud'
keywords: ['Make Before Break', 'MBB', 'Scaling', 'ClickHouse Cloud']
title: 'Make Before Break (MBB) operations in ClickHouse Cloud'
doc_type: 'guide'
---
import Image from '@theme/IdealImage';
import mbb_diagram from '@site/static/images/cloud/features/mbb/vertical_scaling.png';
ClickHouse Cloud performs cluster upgrades and cluster scaling utilizing a Make Before Break (MBB) approach.
In this approach, new replicas are added to the cluster before removing old replicas from it.
This is as opposed to the break-first approach, where old replicas would first be removed, before adding new ones.
The MBB approach has several benefits:
* Since capacity is added to the cluster before removal, the overall cluster capacity does not go down, unlike with the break-first approach. Of course, unplanned events such as node or disk failures can still happen in a cloud environment.
* This approach is especially useful in situations where the cluster is under heavy load, as it prevents existing replicas from being overloaded as would happen with a break-first approach.
* Because replicas can be added quickly without having to wait to remove replicas first, this approach leads to a faster, more responsive scaling experience.
The image below shows how this might happen for a cluster with 3 replicas where the service is scaled vertically:
Overall, MBB leads to a seamless, less disruptive scaling and upgrade experience compared to the break-first approach previously utilized.
With MBB, there are some key behaviors that users need to be aware of:
MBB operations wait for existing workloads on the current replicas to finish before those replicas are terminated.
This period is currently set to 1 hour, which means that scaling or upgrades can wait up to one hour for a long-running query on a replica before the replica is removed.
Additionally, if a backup process is running on a replica, it is left to complete before the replica is terminated.

{"source_file": "02_make_before_break.md"}
Because there is a waiting time before a replica is terminated, there can be situations where a cluster might have more than the maximum number of replicas set for the cluster.
For example, you might have a service with 6 total replicas, but with an MBB operation in progress, 3 additional replicas may get added to the cluster leading to a total of 9 replicas, while the older replicas are still serving queries.
This means that for a period of time, the cluster will have more than the desired number of replicas.
Additionally, multiple MBB operations themselves can overlap, leading to replica accumulation. This can happen, for instance, in scenarios where several vertical scaling requests are sent to the cluster via the API.
ClickHouse Cloud has checks in place to restrict the number of replicas that a cluster might accumulate.
With MBB operations, system table data is kept for 30 days. This means every time an MBB operation happens on a cluster, 30 days worth of system table data is replicated from the old replicas to the new ones.
If you are interested in learning more about the mechanics of MBB operations, please look at this blog post from the ClickHouse engineering team.

{"source_file": "02_make_before_break.md"}
---
sidebar_position: 1
sidebar_label: 'Automatic scaling'
slug: /manage/scaling
description: 'Configuring automatic scaling in ClickHouse Cloud'
keywords: ['autoscaling', 'auto scaling', 'scaling', 'horizontal', 'vertical', 'bursts']
title: 'Automatic Scaling'
doc_type: 'guide'
---
import Image from '@theme/IdealImage';
import auto_scaling from '@site/static/images/cloud/manage/AutoScaling.png';
import scaling_patch_request from '@site/static/images/cloud/manage/scaling-patch-request.png';
import scaling_patch_response from '@site/static/images/cloud/manage/scaling-patch-response.png';
import scaling_configure from '@site/static/images/cloud/manage/scaling-configure.png';
import scaling_memory_allocation from '@site/static/images/cloud/manage/scaling-memory-allocation.png';
import ScalePlanFeatureBadge from '@theme/badges/ScalePlanFeatureBadge'
Automatic scaling
Scaling is the ability to adjust available resources to meet client demands. Scale and Enterprise (with standard 1:4 profile) tier services can be scaled horizontally by calling an API programmatically, or changing settings on the UI to adjust system resources. These services can also be autoscaled vertically to meet application demands.
:::note
The Scale and Enterprise tiers support both single- and multi-replica services, whereas the Basic tier supports only single-replica services. Single-replica services are meant to be fixed in size and do not allow vertical or horizontal scaling. Users can upgrade to the Scale or Enterprise tier to scale their services.
:::
How scaling works in ClickHouse Cloud {#how-scaling-works-in-clickhouse-cloud}
Currently, ClickHouse Cloud supports vertical autoscaling and manual horizontal scaling for Scale tier services.
For Enterprise tier services scaling works as follows:
**Horizontal scaling**: Manual horizontal scaling will be available across all standard and custom profiles on the Enterprise tier.

**Vertical scaling**:
- Standard profiles (1:4) will support vertical autoscaling.
- Custom profiles (`highMemory` and `highCPU`) do not support vertical autoscaling or manual vertical scaling. However, these services can be scaled vertically by contacting support.

{"source_file": "01_auto_scaling.md"}
:::note
Scaling in ClickHouse Cloud happens in what we call a "Make Before Break" (MBB) approach.
This adds one or more replicas of the new size before removing the old replicas, preventing any loss of capacity during scaling operations.
By eliminating the gap between removing existing replicas and adding new ones, MBB creates a more seamless and less disruptive scaling process.
It is especially beneficial in scale-up scenarios, where high resource utilization triggers the need for additional capacity, since removing replicas prematurely would only exacerbate the resource constraints.
As part of this approach, we wait up to an hour to let any existing queries complete on the older replicas before removing them.
This balances the need for existing queries to complete, while at the same time ensuring that older replicas do not linger around for too long.
Please note that as part of this change:
1. Historical system table data is retained for up to a maximum of 30 days as part of scaling events. In addition, any system table data older than December 19, 2024, for services on AWS or GCP and older than January 14, 2025, for services on Azure will not be retained as part of the migration to the new organization tiers.
2. For services utilizing TDE (Transparent Data Encryption) system table data is currently not maintained after MBB operations. We are working on removing this limitation.
:::
Vertical auto scaling {#vertical-auto-scaling}
Scale and Enterprise services support autoscaling based on CPU and memory usage. We constantly monitor the historical usage of a service over a lookback window (spanning the past 30 hours) to make scaling decisions. If the usage rises above or falls below certain thresholds, we scale the service appropriately to match the demand.
For non-MBB services, CPU-based autoscaling kicks in when CPU usage crosses an upper threshold in the range of 50-75% (actual threshold depends on the size of the cluster). At this point, CPU allocation to the cluster is doubled. If CPU usage falls below half of the upper threshold (for instance, to 25% in case of a 50% upper threshold), CPU allocation is halved.
For services already utilizing the MBB scaling approach, scaling up happens at a CPU threshold of 75%, and scale down happens at half of that threshold, or 37.5%.
Memory-based auto-scaling scales the cluster to 125% of the maximum memory usage, or up to 150% if OOM (out of memory) errors are encountered.
The larger of the CPU or memory recommendation is picked, and the CPU and memory allocated to the service are scaled in lockstep increments of 1 CPU and 4 GiB of memory.
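The scaling rules above can be sketched as a small decision function. This is an illustrative model only — not the actual Cloud autoscaler — using the MBB thresholds (scale up at 75% CPU, down at 37.5%), the 125%/150% memory targets, and the 1 CPU : 4 GiB lockstep increments described above:

```python
import math

CPU_PER_STEP = 1   # one lockstep increment = 1 CPU core...
GIB_PER_STEP = 4   # ...and 4 GiB of memory

def recommend_steps(cpu_cores: float, cpu_util: float,
                    peak_mem_gib: float, oom_seen: bool) -> int:
    """Illustrative model of vertical autoscaling for MBB services:
    double CPU above 75% utilization, halve below 37.5%, and size memory
    to 125% of peak usage (150% if OOM errors were seen). The larger of
    the CPU and memory recommendations wins, rounded up to whole steps."""
    # CPU recommendation: double on high utilization, halve on low.
    if cpu_util >= 0.75:
        cpu_target = cpu_cores * 2
    elif cpu_util < 0.375:
        cpu_target = cpu_cores / 2
    else:
        cpu_target = cpu_cores
    # Express both recommendations in lockstep increments.
    mem_factor = 1.5 if oom_seen else 1.25
    mem_target_steps = math.ceil(peak_mem_gib * mem_factor / GIB_PER_STEP)
    cpu_target_steps = math.ceil(cpu_target / CPU_PER_STEP)
    # The larger of the two recommendations is picked.
    return max(cpu_target_steps, mem_target_steps)
```

For instance, a 4-CPU replica running at 80% CPU would be recommended 8 steps (8 CPU, 32 GiB), regardless of a lower memory recommendation.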
Configuring vertical auto scaling {#configuring-vertical-auto-scaling}

{"source_file": "01_auto_scaling.md"}
The scaling of ClickHouse Cloud Scale or Enterprise services can be adjusted by organization members with the **Admin** role. To configure vertical autoscaling, go to the **Settings** tab for your service and adjust the minimum and maximum memory, along with CPU settings as shown below.
:::note
Single-replica services cannot be scaled, regardless of tier.
:::
Set the **Maximum memory** for your replicas at a higher value than the **Minimum memory**. The service will then scale as needed within those bounds. These settings are also available during the initial service creation flow. Each replica in your service will be allocated the same memory and CPU resources.
You can also choose to set these values the same, essentially "pinning" the service to a specific configuration. Doing so will immediately force scaling to the desired size you picked.
It's important to note that this will disable any auto scaling on the cluster, and your service will not be protected against increases in CPU or memory usage beyond these settings.
:::note
For Enterprise tier services, standard 1:4 profiles will support vertical autoscaling.
Custom profiles will not support vertical autoscaling or manual vertical scaling at launch.
However, these services can be scaled vertically by contacting support.
:::
Manual horizontal scaling {#manual-horizontal-scaling}
You can use the ClickHouse Cloud public APIs to scale your service by updating the scaling settings for the service, or adjust the number of replicas from the cloud console. The **Scale** and **Enterprise** tiers also support single-replica services. Once scaled out, services can be scaled back in to a minimum of a single replica. Note that single-replica services have reduced availability and are not recommended for production usage.
:::note
Services can scale horizontally to a maximum of 20 replicas. If you need additional replicas, please contact our support team.
:::
Horizontal scaling via API {#horizontal-scaling-via-api}
To horizontally scale a cluster, issue a `PATCH` request via the API to adjust the number of replicas. The screenshots below show an API call to scale out a 3-replica cluster to 6 replicas, and the corresponding response.

`PATCH` request to update `numReplicas`

Response from `PATCH` request
If you issue a new scaling request or multiple requests in succession, while one is already in progress, the scaling service will ignore the intermediate states and converge on the final replica count.
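As a rough illustration, the same `PATCH` call can be issued programmatically. This standard-library sketch assumes an `/organizations/{id}/services/{id}/scaling` path and a `numReplicas` body field — confirm the exact endpoint and payload against the API reference before relying on it:

```python
import base64
import json
import urllib.request

def scaling_payload(num_replicas: int) -> bytes:
    """Encode the request body that updates the replica count."""
    return json.dumps({"numReplicas": num_replicas}).encode()

def set_replica_count(org_id: str, service_id: str, num_replicas: int,
                      key_id: str, key_secret: str) -> dict:
    """PATCH the service's scaling settings to the desired replica count.
    NOTE: the exact path and payload are assumptions based on this guide;
    verify them against the API reference."""
    token = base64.b64encode(f"{key_id}:{key_secret}".encode()).decode()
    req = urllib.request.Request(
        f"https://api.clickhouse.cloud/v1/organizations/{org_id}"
        f"/services/{service_id}/scaling",
        data=scaling_payload(num_replicas),
        method="PATCH",
        headers={"Authorization": f"Basic {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# e.g. set_replica_count("my-org-id", "my-service-id", 6,
#                        "mykeyid", "mykeysecret")
```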
Horizontal scaling via UI {#horizontal-scaling-via-ui}
To scale a service horizontally from the UI, you can adjust the number of replicas for the service on the **Settings** page.
Service scaling settings from the ClickHouse Cloud console

{"source_file": "01_auto_scaling.md"}
Once the service has scaled, the metrics dashboard in the cloud console should show the correct allocation to the service. The screenshot below shows the cluster having scaled to a total memory of 96 GiB, which is 6 replicas, each with a 16 GiB memory allocation.
Automatic idling {#automatic-idling}
In the **Settings** page, you can also choose whether or not to allow automatic idling of your service when it is inactive as shown in the image above (i.e. when the service is not executing any user-submitted queries). Automatic idling reduces the cost of your service, as you are not billed for compute resources when the service is paused.
:::note
In certain special cases, for instance when a service has a high number of parts, the service will not be idled automatically.
The service may enter an idle state where it suspends refreshes of refreshable materialized views, consumption from S3Queue, and scheduling of new merges. Existing merge operations will complete before the service transitions to the idle state. To ensure continuous operation of refreshable materialized views and S3Queue consumption, disable the idle state functionality.
:::
:::danger When not to use automatic idling
Use automatic idling only if your use case can handle a delay before responding to queries, because when a service is paused, connections to the service will time out. Automatic idling is ideal for services that are used infrequently and where a delay can be tolerated. It is not recommended for services that power customer-facing features that are used frequently.
:::
Handling spikes in workload {#handling-bursty-workloads}
If you have an upcoming expected spike in your workload, you can use the ClickHouse Cloud API to preemptively scale up your service to handle the spike and scale it down once the demand subsides.
To understand the current CPU cores and memory in use for
each of your replicas, you can run the query below:
```sql
SELECT *
FROM clusterAllReplicas('default', view(
    SELECT
        hostname() AS server,
        anyIf(value, metric = 'CGroupMaxCPU') AS cpu_cores,
        formatReadableSize(anyIf(value, metric = 'CGroupMemoryTotal')) AS memory
    FROM system.asynchronous_metrics
))
ORDER BY server ASC
SETTINGS skip_unavailable_shards = 1
```

{"source_file": "01_auto_scaling.md"}
---
slug: /cloud/get-started/cloud/resource-tour
title: 'Resource tour'
description: 'Overview of ClickHouse Cloud documentation resources for query optimization, scaling strategies, monitoring, and best practices'
keywords: ['clickhouse cloud']
hide_title: true
doc_type: 'guide'
---
import TableOfContentsBestPractices from '@site/docs/best-practices/_snippets/_table_of_contents.md';
import TableOfContentsOptimizationAndPerformance from '@site/docs/guides/best-practices/_snippets/_performance_optimizations_table_of_contents.md';
import TableOfContentsSecurity from '@site/docs/cloud/_snippets/_security_table_of_contents.md';
Resource tour
This article is intended to provide you with an overview of the resources available to you in the docs to learn how to get the most out of your ClickHouse Cloud deployment. Explore resources organized by the following topics:

- Query optimization techniques and performance tuning
- Monitoring
- Security best practices and compliance features
- Cost optimization and billing
Before diving into more specific topics, we recommend you start with our general ClickHouse best practice guides, which cover general best practices to follow when using ClickHouse:
Query optimization techniques and performance tuning {#query-optimization}
Monitoring {#monitoring}
| Page | Description |
|------|-------------|
| Advanced dashboard | Use the built-in advanced dashboard to monitor service health and performance |
| Prometheus integration | Use Prometheus to monitor Cloud services |
| Cloud Monitoring Capabilities | Get an overview of built-in monitoring capabilities and integration options |
Security {#security}
Cost optimization and billing {#cost-optimization}
| Page | Description |
|------|-------------|
| Data transfer | Understand how ClickHouse Cloud meters data transferred ingress and egress |
| Notifications | Set up notifications for your ClickHouse Cloud service, for example, when credit usage passes a threshold |

{"source_file": "resource_tour.md"}
---
slug: /cloud/overview
title: 'Introduction'
description: 'Learn what ClickHouse Cloud is, its benefits over open-source, and key features of the fully managed analytics platform'
keywords: ['clickhouse cloud', 'what is clickhouse cloud', 'clickhouse cloud overview', 'clickhouse cloud features']
hide_title: true
doc_type: 'guide'
---
What is ClickHouse Cloud? {#what-is-clickhouse-cloud}
ClickHouse Cloud is a fully managed cloud service created by the original creators
of ClickHouse, the fastest and most popular open-source columnar online analytical
processing database.
With Cloud, infrastructure, maintenance, scaling, and operations are taken care of
for you, so that you can focus on what matters most to you, which is building value
for your organization and your customers faster.
Benefits of ClickHouse Cloud {#benefits-of-clickhouse-cloud}
ClickHouse Cloud offers several major benefits over the open-source version:
- **Fast time to value**: Start building instantly without having to size and scale your cluster.
- **Seamless scaling**: Automatic scaling adjusts to variable workloads so you don't have to over-provision for peak usage.
- **Serverless operations**: Sit back while we take care of sizing, scaling, security, reliability, and upgrades.
- **Transparent pricing**: Pay only for what you use, with resource reservations and scaling controls.
- **Total cost of ownership**: Best price / performance ratio and low administrative overhead.
- **Broad ecosystem**: Bring your favorite data connectors, visualization tools, SQL and language clients with you.
OSS vs ClickHouse Cloud comparison {#oss-vs-clickhouse-cloud}

{"source_file": "01_what_is.md"}
| Feature | Benefits | OSS ClickHouse | ClickHouse Cloud |
|---------|----------|----------------|------------------|
| Deployment modes | ClickHouse provides the flexibility to self-manage with open-source or deploy in the cloud. Use ClickHouse local for local files without a server or chDB to embed ClickHouse directly into your application. | ✅ | ✅ |
| Storage | As an open-source and cloud-hosted product, ClickHouse can be deployed in both shared-disk and shared-nothing architectures. | ✅ | ✅ |
| Monitoring and alerting | Monitoring and alerting about the status of your services is critical to ensuring optimal performance and a proactive approach to detect and triage potential issues. | ✅ | ✅ |
| ClickPipes | ClickPipes is ClickHouse's managed ingestion pipeline that allows you to seamlessly connect your external data sources like databases, APIs, and streaming services into ClickHouse Cloud, eliminating the need for managing pipelines, custom jobs, or ETL processes. It supports workloads of all sizes. | ❌ | ✅ |
| Pre-built integrations | ClickHouse provides pre-built integrations that connect ClickHouse to popular tools and services such as data lakes, SQL and language clients, visualization libraries, and more. | ❌ | ✅ |
| SQL console | The SQL console offers a fast, intuitive way to connect, explore, and query ClickHouse databases, featuring a slick caption, query interface, data import tools, visualizations, collaboration features, and GenAI-powered SQL assistance. | ❌ | ✅ |
| Compliance

{"source_file": "01_what_is.md"}