Notice that it is identical to the query used in the original `CREATE` statement of the UK property prices example dataset tutorial, except for the `ON CLUSTER` clause and the use of the `ReplicatedMergeTree` engine.
The `ON CLUSTER` clause is designed for distributed execution of DDL (Data Definition Language)
queries such as `CREATE`, `DROP`, `ALTER`, and `RENAME`, ensuring that these
schema changes are applied across all nodes in a cluster.
The `ReplicatedMergeTree` engine works just like the ordinary `MergeTree` table engine, but it also replicates the data.
You can run the query below from the client of either `clickhouse-01` or `clickhouse-02`
to confirm that the table has been created across the cluster:
```sql title="Query"
SHOW TABLES IN uk;
```

```response title="Response"
   ┌─name────────────────┐
1. │ uk_price_paid_local │
   └─────────────────────┘
```
## Insert data {#inserting-data}
As the data set is large and takes a few minutes to completely ingest, we will
insert only a small subset to begin with.
Insert a smaller subset of the data using the query below from `clickhouse-01`:
```sql
INSERT INTO uk.uk_price_paid_local
SELECT
    toUInt32(price_string) AS price,
    parseDateTimeBestEffortUS(time) AS date,
    splitByChar(' ', postcode)[1] AS postcode1,
    splitByChar(' ', postcode)[2] AS postcode2,
    transform(a, ['T', 'S', 'D', 'F', 'O'], ['terraced', 'semi-detached', 'detached', 'flat', 'other']) AS type,
    b = 'Y' AS is_new,
    transform(c, ['F', 'L', 'U'], ['freehold', 'leasehold', 'unknown']) AS duration,
    addr1,
    addr2,
    street,
    locality,
    town,
    district,
    county
FROM url(
    'http://prod1.publicdata.landregistry.gov.uk.s3-website-eu-west-1.amazonaws.com/pp-complete.csv',
    'CSV',
    'uuid_string String,
    price_string String,
    time String,
    postcode String,
    a String,
    b String,
    c String,
    addr1 String,
    addr2 String,
    street String,
    locality String,
    town String,
    district String,
    county String,
    d String,
    e String'
) LIMIT 10000
SETTINGS max_http_get_redirects=10;
```
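The `SELECT` in the insert above maps the raw CSV codes onto readable values: `splitByChar` splits the postcode, `transform` translates the single-letter type and duration codes, and `b = 'Y'` becomes a 0/1 flag. A plain Python sketch of the same per-row mapping logic (the sample row values are hypothetical, purely for illustration):

```python
# Python sketch of the per-row mapping the INSERT's SELECT applies.
# The sample row below is hypothetical, for illustration only.
type_map = {"T": "terraced", "S": "semi-detached", "D": "detached",
            "F": "flat", "O": "other"}
duration_map = {"F": "freehold", "L": "leasehold", "U": "unknown"}

def map_row(price_string, postcode, a, b, c):
    parts = postcode.split(" ")
    return {
        "price": int(price_string),                        # toUInt32(price_string)
        "postcode1": parts[0] if len(parts) > 0 else "",   # splitByChar(' ', postcode)[1]
        "postcode2": parts[1] if len(parts) > 1 else "",   # splitByChar(' ', postcode)[2]
        "type": type_map.get(a, "other"),                  # transform(a, ...); unmatched falls back for simplicity
        "is_new": 1 if b == "Y" else 0,                    # b = 'Y'
        "duration": duration_map.get(c, "unknown"),        # transform(c, ...)
    }

print(map_row("250000", "SW1A 1AA", "F", "N", "L"))
```

This only mirrors the shape of the transformation; in the real pipeline ClickHouse performs it server-side while streaming the CSV.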
Notice that the data is completely replicated on each host:
```sql
-- clickhouse-01
SELECT count(*)
FROM uk.uk_price_paid_local
--    ┌─count()─┐
-- 1. │   10000 │
--    └─────────┘
-- clickhouse-02
SELECT count(*)
FROM uk.uk_price_paid_local
--    ┌─count()─┐
-- 1. │   10000 │
--    └─────────┘
```
To demonstrate what happens when one of the hosts fails, create a simple test database
and test table from either of the hosts:
```sql
CREATE DATABASE IF NOT EXISTS test ON CLUSTER cluster_1S_2R;

CREATE TABLE test.test_table ON CLUSTER cluster_1S_2R
(
    `id` UInt64,
    `name` String
)
ENGINE = ReplicatedMergeTree
ORDER BY id;
```
As with the `uk_price_paid_local` table, we can insert data from either host:
```sql
INSERT INTO test.test_table (id, name) VALUES (1, 'Clicky McClickface');
```
But what will happen if one of the hosts is down? To simulate this, stop `clickhouse-01` by running:
```bash
docker stop clickhouse-01
```
Check that the host is down by running:
```bash
docker-compose ps
```
```response title="Response"
NAME                   IMAGE                                        COMMAND            SERVICE                CREATED          STATUS          PORTS
clickhouse-02          clickhouse/clickhouse-server:latest          "/entrypoint.sh"   clickhouse-02          X minutes ago    Up X minutes    127.0.0.1:8124->8123/tcp, 127.0.0.1:9001->9000/tcp
clickhouse-keeper-01   clickhouse/clickhouse-keeper:latest-alpine   "/entrypoint.sh"   clickhouse-keeper-01   X minutes ago    Up X minutes    127.0.0.1:9181->9181/tcp
clickhouse-keeper-02   clickhouse/clickhouse-keeper:latest-alpine   "/entrypoint.sh"   clickhouse-keeper-02   X minutes ago    Up X minutes    127.0.0.1:9182->9181/tcp
clickhouse-keeper-03   clickhouse/clickhouse-keeper:latest-alpine   "/entrypoint.sh"   clickhouse-keeper-03   X minutes ago    Up X minutes    127.0.0.1:9183->9181/tcp
```
With `clickhouse-01` now down, insert another row of data into the test table and query the table:
```sql
INSERT INTO test.test_table (id, name) VALUES (2, 'Alexey Milovidov');

SELECT * FROM test.test_table;
```
```response title="Response"
   ┌─id─┬─name───────────────┐
1. │  1 │ Clicky McClickface │
2. │  2 │ Alexey Milovidov   │
   └────┴────────────────────┘
```
Now restart `clickhouse-01` with the following command (you can run `docker-compose ps` again afterwards to confirm):

```bash
docker start clickhouse-01
```
Query the test table again from `clickhouse-01` after running `docker exec -it clickhouse-01 clickhouse-client`:

```sql title="Query"
SELECT * FROM test.test_table;
```
```response title="Response"
   ┌─id─┬─name───────────────┐
1. │  1 │ Clicky McClickface │
2. │  2 │ Alexey Milovidov   │
   └────┴────────────────────┘
```
If at this stage you would like to ingest the full UK property price dataset to play around with, you can run the following queries to do so:
```sql
TRUNCATE TABLE uk.uk_price_paid_local ON CLUSTER cluster_1S_2R;

INSERT INTO uk.uk_price_paid_local
SELECT
    toUInt32(price_string) AS price,
    parseDateTimeBestEffortUS(time) AS date,
    splitByChar(' ', postcode)[1] AS postcode1,
    splitByChar(' ', postcode)[2] AS postcode2,
    transform(a, ['T', 'S', 'D', 'F', 'O'], ['terraced', 'semi-detached', 'detached', 'flat', 'other']) AS type,
    b = 'Y' AS is_new,
    transform(c, ['F', 'L', 'U'], ['freehold', 'leasehold', 'unknown']) AS duration,
    addr1,
    addr2,
    street,
    locality,
    town,
    district,
    county
FROM url(
    'http://prod1.publicdata.landregistry.gov.uk.s3-website-eu-west-1.amazonaws.com/pp-complete.csv',
    'CSV',
    'uuid_string String,
    price_string String,
    time String,
    postcode String,
    a String,
    b String,
    c String,
    addr1 String,
    addr2 String,
    street String,
    locality String,
    town String,
    district String,
    county String,
    d String,
    e String'
) SETTINGS max_http_get_redirects=10;
```
Query the table from `clickhouse-02` or `clickhouse-01`:
```sql title="Query"
SELECT count(*) FROM uk.uk_price_paid_local;
```

```response title="Response"
   ┌──count()─┐
1. │ 30212555 │ -- 30.21 million
   └──────────┘
```
## Conclusion {#conclusion}
The advantage of this cluster topology is that with two replicas,
your data exists on two separate hosts. If one host fails, the other replica
continues serving data without any loss. This eliminates single points of
failure at the storage level.
When one host goes down, the remaining replica is still able to:
- Handle read queries without interruption
- Accept new writes (depending on your consistency settings)
- Maintain service availability for applications
When the failed host comes back online, it is able to:
- Automatically sync missing data from the healthy replica
- Resume normal operations without manual intervention
- Restore full redundancy quickly
In the next example, we'll look at how to set up a cluster with two shards but only one replica.
---
slug: /architecture/horizontal-scaling
sidebar_label: 'Scaling'
sidebar_position: 10
title: 'Scaling'
description: 'Page describing an example architecture designed to provide scalability'
doc_type: 'guide'
keywords: ['sharding', 'horizontal scaling', 'distributed data', 'cluster setup', 'data distribution']
---
import Image from '@theme/IdealImage';
import ReplicationShardingTerminology from '@site/docs/_snippets/_replication-sharding-terminology.md';
import ShardingArchitecture from '@site/static/images/deployment-guides/replication-sharding-examples/sharding.png';
import ConfigFileNote from '@site/docs/_snippets/_config-files.md';
import KeeperConfigFileNote from '@site/docs/_snippets/_keeper-config-files.md';
import ConfigExplanation from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_config_explanation.mdx';
import ListenHost from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_listen_host.mdx';
import ServerParameterTable from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_server_parameter_table.mdx';
import KeeperConfig from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_keeper_config.mdx';
import KeeperConfigExplanation from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_keeper_explanation.mdx';
import VerifyKeeperStatus from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_verify_keeper_using_mntr.mdx';
import DedicatedKeeperServers from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_dedicated_keeper_servers.mdx';
import ExampleFiles from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_working_example.mdx';
import CloudTip from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_cloud_tip.mdx';
In this example, you'll learn how to set up a simple ClickHouse cluster which
scales. There are five servers configured. Two are used to shard the data.
The other three servers are used for coordination.
The architecture of the cluster you will be setting up is shown below:
## Prerequisites {#pre-requisites}
- You've set up a local ClickHouse server before
- You are familiar with basic configuration concepts of ClickHouse, such as configuration files
- You have Docker installed on your machine
## Set up directory structure and test environment {#set-up}
In this tutorial, you will use Docker Compose to set up the ClickHouse cluster. This setup could be modified to work for separate local machines, virtual machines, or cloud instances as well.

Run the following commands to set up the directory structure for this example:
```bash
mkdir cluster_2S_1R
cd cluster_2S_1R

# Create clickhouse-keeper directories
for i in {01..03}; do
  mkdir -p fs/volumes/clickhouse-keeper-${i}/etc/clickhouse-keeper
done
# Create clickhouse-server directories
for i in {01..02}; do
mkdir -p fs/volumes/clickhouse-${i}/etc/clickhouse-server
done
```
Add the following `docker-compose.yml` file to the `cluster_2S_1R` directory:
```yaml title="docker-compose.yml"
version: '3.8'

services:
  clickhouse-01:
    image: "clickhouse/clickhouse-server:latest"
    user: "101:101"
    container_name: clickhouse-01
    hostname: clickhouse-01
    networks:
      cluster_2S_1R:
        ipv4_address: 192.168.7.1
    volumes:
      - ${PWD}/fs/volumes/clickhouse-01/etc/clickhouse-server/config.d/config.xml:/etc/clickhouse-server/config.d/config.xml
      - ${PWD}/fs/volumes/clickhouse-01/etc/clickhouse-server/users.d/users.xml:/etc/clickhouse-server/users.d/users.xml
    ports:
      - "127.0.0.1:8123:8123"
      - "127.0.0.1:9000:9000"
    depends_on:
      - clickhouse-keeper-01
      - clickhouse-keeper-02
      - clickhouse-keeper-03
  clickhouse-02:
    image: "clickhouse/clickhouse-server:latest"
    user: "101:101"
    container_name: clickhouse-02
    hostname: clickhouse-02
    networks:
      cluster_2S_1R:
        ipv4_address: 192.168.7.2
    volumes:
      - ${PWD}/fs/volumes/clickhouse-02/etc/clickhouse-server/config.d/config.xml:/etc/clickhouse-server/config.d/config.xml
      - ${PWD}/fs/volumes/clickhouse-02/etc/clickhouse-server/users.d/users.xml:/etc/clickhouse-server/users.d/users.xml
    ports:
      - "127.0.0.1:8124:8123"
      - "127.0.0.1:9001:9000"
    depends_on:
      - clickhouse-keeper-01
      - clickhouse-keeper-02
      - clickhouse-keeper-03
  clickhouse-keeper-01:
    image: "clickhouse/clickhouse-keeper:latest-alpine"
    user: "101:101"
    container_name: clickhouse-keeper-01
    hostname: clickhouse-keeper-01
    networks:
      cluster_2S_1R:
        ipv4_address: 192.168.7.5
    volumes:
      - ${PWD}/fs/volumes/clickhouse-keeper-01/etc/clickhouse-keeper/keeper_config.xml:/etc/clickhouse-keeper/keeper_config.xml
    ports:
      - "127.0.0.1:9181:9181"
  clickhouse-keeper-02:
    image: "clickhouse/clickhouse-keeper:latest-alpine"
    user: "101:101"
    container_name: clickhouse-keeper-02
    hostname: clickhouse-keeper-02
    networks:
      cluster_2S_1R:
        ipv4_address: 192.168.7.6
    volumes:
      - ${PWD}/fs/volumes/clickhouse-keeper-02/etc/clickhouse-keeper/keeper_config.xml:/etc/clickhouse-keeper/keeper_config.xml
    ports:
      - "127.0.0.1:9182:9181"
  clickhouse-keeper-03:
    image: "clickhouse/clickhouse-keeper:latest-alpine"
    user: "101:101"
    container_name: clickhouse-keeper-03
    hostname: clickhouse-keeper-03
    networks:
      cluster_2S_1R:
        ipv4_address: 192.168.7.7
    volumes:
      - ${PWD}/fs/volumes/clickhouse-keeper-03/etc/clickhouse-keeper/keeper_config.xml:/etc/clickhouse-keeper/keeper_config.xml
    ports:
      - "127.0.0.1:9183:9181"

networks:
  cluster_2S_1R:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.7.0/24
          gateway: 192.168.7.254
```
Create the following sub-directories and files:
```bash
for i in {01..02}; do
  mkdir -p fs/volumes/clickhouse-${i}/etc/clickhouse-server/config.d
  mkdir -p fs/volumes/clickhouse-${i}/etc/clickhouse-server/users.d
  touch fs/volumes/clickhouse-${i}/etc/clickhouse-server/config.d/config.xml
  touch fs/volumes/clickhouse-${i}/etc/clickhouse-server/users.d/users.xml
done
```
## Configure ClickHouse nodes {#configure-clickhouse-servers}

### Server setup {#server-setup}
Now modify each empty configuration file `config.xml` located at
`fs/volumes/clickhouse-{}/etc/clickhouse-server/config.d`. The lines which are
highlighted below need to be changed to be specific to each node:
```xml
<clickhouse replace="true">
    <logger>
        <level>debug</level>
        <log>/var/log/clickhouse-server/clickhouse-server.log</log>
        <errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
        <size>1000M</size>
        <count>3</count>
    </logger>
    <!--highlight-next-line-->
    <display_name>cluster_2S_1R node 1</display_name>
    <listen_host>0.0.0.0</listen_host>
    <http_port>8123</http_port>
    <tcp_port>9000</tcp_port>
    <user_directories>
        <users_xml>
            <path>users.xml</path>
        </users_xml>
        <local_directory>
            <path>/var/lib/clickhouse/access/</path>
        </local_directory>
    </user_directories>
    <distributed_ddl>
        <path>/clickhouse/task_queue/ddl</path>
    </distributed_ddl>
    <remote_servers>
        <cluster_2S_1R>
            <shard>
                <replica>
                    <host>clickhouse-01</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <replica>
                    <host>clickhouse-02</host>
                    <port>9000</port>
                </replica>
            </shard>
        </cluster_2S_1R>
    </remote_servers>
    <zookeeper>
        <node>
            <host>clickhouse-keeper-01</host>
            <port>9181</port>
        </node>
        <node>
            <host>clickhouse-keeper-02</host>
            <port>9181</port>
        </node>
        <node>
            <host>clickhouse-keeper-03</host>
            <port>9181</port>
        </node>
    </zookeeper>
    <!--highlight-start-->
    <macros>
        <shard>01</shard>
        <replica>01</replica>
    </macros>
    <!--highlight-end-->
</clickhouse>
```
| Directory                                                 | File         |
|-----------------------------------------------------------|--------------|
| `fs/volumes/clickhouse-01/etc/clickhouse-server/config.d` | `config.xml` |
| `fs/volumes/clickhouse-02/etc/clickhouse-server/config.d` | `config.xml` |
Each section of the above configuration file is explained in more detail below.
### Networking and logging {#networking}
Logging is defined in the `<logger>` block. This example configuration gives
you a debug log that will roll over at 1000M three times:
```xml
<logger>
    <level>debug</level>
    <log>/var/log/clickhouse-server/clickhouse-server.log</log>
    <errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
    <size>1000M</size>
    <count>3</count>
</logger>
```
For more information on logging configuration, see the comments included in the
default ClickHouse configuration file.
### Cluster configuration {#cluster-configuration}
Configuration for the cluster is set up in the `<remote_servers>` block.
Here the cluster name `cluster_2S_1R` is defined.

The `<cluster_2S_1R></cluster_2S_1R>` block defines the layout of the cluster,
using the `<shard></shard>` and `<replica></replica>` settings, and acts as a
template for distributed DDL queries, which are queries that execute across the
cluster using the `ON CLUSTER` clause. By default, distributed DDL queries
are allowed, but can also be turned off with the setting `allow_distributed_ddl_queries`.

`internal_replication` is left at its default value of false, since there is only one replica per shard.
```xml
<remote_servers>
    <cluster_2S_1R>
        <shard>
            <replica>
                <host>clickhouse-01</host>
                <port>9000</port>
            </replica>
        </shard>
        <shard>
            <replica>
                <host>clickhouse-02</host>
                <port>9000</port>
            </replica>
        </shard>
    </cluster_2S_1R>
</remote_servers>
```
### Keeper configuration {#keeper-config-explanation}
The `<zookeeper>` section tells ClickHouse where ClickHouse Keeper (or ZooKeeper) is running.
As we are using a ClickHouse Keeper cluster, each `<node>` of the cluster needs to be specified,
along with its hostname and port number using the `<host>` and `<port>` tags respectively.
Set up of ClickHouse Keeper is explained in the next step of the tutorial. | {"source_file": "02_2_shards_1_replica.md"} | [
```xml
<zookeeper>
    <node>
        <host>clickhouse-keeper-01</host>
        <port>9181</port>
    </node>
    <node>
        <host>clickhouse-keeper-02</host>
        <port>9181</port>
    </node>
    <node>
        <host>clickhouse-keeper-03</host>
        <port>9181</port>
    </node>
</zookeeper>
```
:::note
Although it is possible to run ClickHouse Keeper on the same server as ClickHouse Server,
in production environments we strongly recommend that ClickHouse Keeper runs on dedicated hosts.
:::
### Macros configuration {#macros-config-explanation}
Additionally, the `<macros>` section is used to define parameter substitutions for
replicated tables. These are listed in `system.macros` and allow using substitutions
like `{shard}` and `{replica}` in queries.
```xml
<macros>
    <shard>01</shard>
    <replica>01</replica>
</macros>
```
:::note
These will be defined uniquely depending on the layout of the cluster.
:::
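To see why these macros matter: replicated tables typically embed `{shard}` and `{replica}` in their Keeper path and replica name, and ClickHouse expands them per node. A small Python sketch of that substitution behaviour (the path template below is illustrative, not taken from this guide):

```python
# Sketch of macro substitution as used in replicated table paths.
# The template is a hypothetical example of the common convention.
macros = {"shard": "01", "replica": "01"}

def expand(template, macros):
    # Replace each {name} placeholder with its macro value.
    for name, value in macros.items():
        template = template.replace("{" + name + "}", value)
    return template

path = expand("/clickhouse/tables/{shard}/{replica}/uk_price_paid_local", macros)
print(path)  # /clickhouse/tables/01/01/uk_price_paid_local
```

Because each node defines its own macro values, the same DDL statement expands to a node-specific path on every host, which is what lets a single `ON CLUSTER` query work across the cluster.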
### User configuration {#user-config}
Now modify each empty configuration file `users.xml` located at
`fs/volumes/clickhouse-{}/etc/clickhouse-server/users.d` with the following:
```xml title="/users.d/users.xml"
<?xml version="1.0"?>
<clickhouse replace="true">
    <profiles>
        <default>
            <max_memory_usage>10000000000</max_memory_usage>
            <use_uncompressed_cache>0</use_uncompressed_cache>
            <load_balancing>in_order</load_balancing>
            <log_queries>1</log_queries>
        </default>
    </profiles>
    <users>
        <default>
            <access_management>1</access_management>
            <profile>default</profile>
            <networks>
                <ip>::/0</ip>
            </networks>
            <quota>default</quota>
            <named_collection_control>1</named_collection_control>
            <show_named_collections>1</show_named_collections>
            <show_named_collections_secrets>1</show_named_collections_secrets>
        </default>
    </users>
    <quotas>
        <default>
            <interval>
                <duration>3600</duration>
                <queries>0</queries>
                <errors>0</errors>
                <result_rows>0</result_rows>
                <read_rows>0</read_rows>
                <execution_time>0</execution_time>
            </interval>
        </default>
    </quotas>
</clickhouse>
```
| Directory                                                | File        |
|----------------------------------------------------------|-------------|
| `fs/volumes/clickhouse-01/etc/clickhouse-server/users.d` | `users.xml` |
| `fs/volumes/clickhouse-02/etc/clickhouse-server/users.d` | `users.xml` |
In this example, the default user is configured without a password for simplicity.
In practice, this is discouraged.
:::note
In this example, each `users.xml` file is identical for all nodes in the cluster.
:::
## Configure ClickHouse Keeper {#configure-clickhouse-keeper-nodes}

### Keeper setup {#configuration-explanation}
| Directory                                               | File                |
|---------------------------------------------------------|---------------------|
| `fs/volumes/clickhouse-keeper-01/etc/clickhouse-keeper` | `keeper_config.xml` |
| `fs/volumes/clickhouse-keeper-02/etc/clickhouse-keeper` | `keeper_config.xml` |
| `fs/volumes/clickhouse-keeper-03/etc/clickhouse-keeper` | `keeper_config.xml` |
## Test the setup {#test-the-setup}
Make sure that Docker is running on your machine.
Start the cluster using the `docker-compose up` command from the root of the `cluster_2S_1R` directory:

```bash
docker-compose up -d
```
You should see Docker begin to pull the ClickHouse and Keeper images,
and then start the containers:

```bash
[+] Running 6/6
 ✔ Network cluster_2s_1r_default   Created
 ✔ Container clickhouse-keeper-03  Started
 ✔ Container clickhouse-keeper-02  Started
 ✔ Container clickhouse-keeper-01  Started
 ✔ Container clickhouse-01         Started
 ✔ Container clickhouse-02         Started
```
To verify that the cluster is running, connect to either `clickhouse-01` or `clickhouse-02` and run the
following query. The command to connect to the first node is shown:

```bash
# Connect to any node
docker exec -it clickhouse-01 clickhouse-client
```
If successful, you will see the ClickHouse client prompt:

```response
cluster_2S_1R node 1 :)
```
Run the following query to check what cluster topologies are defined for which
hosts:
```sql title="Query"
SELECT
    cluster,
    shard_num,
    replica_num,
    host_name,
    port
FROM system.clusters;
```

```response title="Response"
   ┌─cluster───────┬─shard_num─┬─replica_num─┬─host_name─────┬─port─┐
1. │ cluster_2S_1R │         1 │           1 │ clickhouse-01 │ 9000 │
2. │ cluster_2S_1R │         2 │           1 │ clickhouse-02 │ 9000 │
3. │ default       │         1 │           1 │ localhost     │ 9000 │
   └───────────────┴───────────┴─────────────┴───────────────┴──────┘
```
Run the following query to check the status of the ClickHouse Keeper cluster:
```sql title="Query"
SELECT *
FROM system.zookeeper
WHERE path IN ('/', '/clickhouse');
```

```response title="Response"
   ┌─name───────┬─value─┬─path────────┐
1. │ task_queue │       │ /clickhouse │
2. │ sessions   │       │ /clickhouse │
3. │ clickhouse │       │ /           │
4. │ keeper     │       │ /           │
   └────────────┴───────┴─────────────┘
```
With this, you have successfully set up a ClickHouse cluster with two shards and a single replica.
In the next step, you will create a table in the cluster.
## Create a database {#creating-a-database}
Now that you have verified that the cluster is correctly set up and running, you
will recreate the same table as the one used in the UK property prices
example dataset tutorial. It consists of around 30 million rows of prices paid
for real-estate property in England and Wales since 1995.
Connect to the client of each host by running each of the following commands from separate terminal
tabs or windows:
```bash
docker exec -it clickhouse-01 clickhouse-client
docker exec -it clickhouse-02 clickhouse-client
```
You can run the query below from the client of each host to confirm that
there are no databases created yet, apart from the default ones:

```sql title="Query"
SHOW DATABASES;
```

```response title="Response"
   ┌─name───────────────┐
1. │ INFORMATION_SCHEMA │
2. │ default            │
3. │ information_schema │
4. │ system             │
   └────────────────────┘
```
From the `clickhouse-01` client, run the following distributed DDL query using the
`ON CLUSTER` clause to create a new database called `uk`:

```sql
CREATE DATABASE IF NOT EXISTS uk
-- highlight-next-line
ON CLUSTER cluster_2S_1R;
```
You can again run the same query as before from the client of each host
to confirm that the database has been created across the cluster, despite having run
the query only on `clickhouse-01`:

```sql
SHOW DATABASES;
```

```response
   ┌─name───────────────┐
1. │ INFORMATION_SCHEMA │
2. │ default            │
3. │ information_schema │
4. │ system             │
highlight-next-line
5. │ uk                 │
   └────────────────────┘
```
## Create a table on the cluster {#creating-a-table}
Now that the database has been created, you will create a table.
Run the following query from any of the host clients:
```sql
CREATE TABLE IF NOT EXISTS uk.uk_price_paid_local
--highlight-next-line
ON CLUSTER cluster_2S_1R
(
    price UInt32,
    date Date,
    postcode1 LowCardinality(String),
    postcode2 LowCardinality(String),
    type Enum8('terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4, 'other' = 0),
    is_new UInt8,
    duration Enum8('freehold' = 1, 'leasehold' = 2, 'unknown' = 0),
    addr1 String,
    addr2 String,
    street LowCardinality(String),
    locality LowCardinality(String),
    town LowCardinality(String),
    district LowCardinality(String),
    county LowCardinality(String)
)
ENGINE = MergeTree
ORDER BY (postcode1, postcode2, addr1, addr2);
```
Notice that it is identical to the query used in the original `CREATE` statement of the
UK property prices example dataset tutorial, except for the `ON CLUSTER` clause.
The `ON CLUSTER` clause is designed for distributed execution of DDL (Data Definition Language)
queries such as `CREATE`, `DROP`, `ALTER`, and `RENAME`, ensuring that these
schema changes are applied across all nodes in a cluster.
You can run the query below from each host's client to confirm that the table has been created across the cluster:
```sql title="Query"
SHOW TABLES IN uk;
```

```response title="Response"
   ┌─name────────────────┐
1. │ uk_price_paid_local │
   └─────────────────────┘
```
Before we insert the UK price paid data, let's perform a quick experiment to see
what happens when we insert data into an ordinary table from either host.
Create a test database and table with the following query from either host:
```sql
CREATE DATABASE IF NOT EXISTS test ON CLUSTER cluster_2S_1R;

CREATE TABLE test.test_table ON CLUSTER cluster_2S_1R
(
    `id` UInt64,
    `name` String
)
ENGINE = MergeTree()
ORDER BY id;
```
Now from `clickhouse-01` run the following `INSERT` query:

```sql
INSERT INTO test.test_table (id, name) VALUES (1, 'Clicky McClickface');
```
Switch over to `clickhouse-02` and run the following `INSERT` query:

```sql title="Query"
INSERT INTO test.test_table (id, name) VALUES (1, 'Alexey Milovidov');
```
Now from `clickhouse-01` or `clickhouse-02` run the following query:
```sql
-- from clickhouse-01
SELECT * FROM test.test_table;
--    ┌─id─┬─name───────────────┐
-- 1. │  1 │ Clicky McClickface │
--    └────┴────────────────────┘

-- from clickhouse-02
SELECT * FROM test.test_table;
--    ┌─id─┬─name─────────────┐
-- 1. │  1 │ Alexey Milovidov │
--    └────┴──────────────────┘
```
You will notice that, unlike with a `ReplicatedMergeTree` table, only the row
that was inserted into the table on that particular host is returned, not both rows.

To read the data across the two shards, we need an interface which can handle queries
across all the shards: combining the data from both shards when we run `SELECT` queries,
and inserting data into both shards when we run `INSERT` queries.
In ClickHouse this interface is called a *distributed table*, which we create using
the `Distributed` table engine. Let's take a look at how it works.
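Conceptually, a `SELECT` against a distributed table is a scatter-gather: the query is sent to each shard's local table and the partial result sets are merged. A toy Python model of that behaviour (the shard contents mirror the two test rows above, but this is only an illustration, not how ClickHouse is implemented):

```python
# Toy model: each shard holds its own local rows; a distributed SELECT
# queries every shard (scatter) and merges the partial results (gather).
shard_01 = [(1, "Clicky McClickface")]
shard_02 = [(1, "Alexey Milovidov")]

def distributed_select(shards):
    rows = []
    for shard in shards:       # scatter: read each shard's local table
        rows.extend(shard)     # gather: merge the partial result sets
    return rows

print(distributed_select([shard_01, shard_02]))
```

In the real system the per-shard reads happen in parallel over the network, and aggregations are partially computed on each shard before merging, but the overall shape is the same.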
## Create a distributed table {#create-distributed-table}
Create a distributed table with the following query:
```sql
CREATE TABLE test.test_table_dist ON CLUSTER cluster_2S_1R AS test.test_table
ENGINE = Distributed('cluster_2S_1R', 'test', 'test_table', rand());
```
In this example, the `rand()` function is chosen as the sharding key so that
inserts are randomly distributed across the shards.
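For each inserted row, the sharding key's value (modulo the total shard weight) selects the destination shard, so a uniformly random key spreads inserts roughly evenly. A minimal Python sketch of this behaviour for two equally weighted shards (`random.getrandbits` stands in for ClickHouse's `rand()`):

```python
import random

# Sketch: a random sharding key taken modulo the shard count picks the
# destination shard per row, so inserts spread roughly evenly.
random.seed(0)  # deterministic for the example
counts = [0, 0]  # rows landing on shard 1 and shard 2
for _ in range(10000):
    key = random.getrandbits(32)   # stand-in for rand()
    counts[key % 2] += 1           # 2 shards, equal weight

print(counts)  # roughly [5000, 5000]
```

A deterministic key such as a hash of a column could be used instead when related rows should land on the same shard.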
Now query the distributed table from either host, and you will get back
both of the rows which were inserted on the two hosts, unlike in our previous example:
```sql
SELECT * FROM test.test_table_dist;
```
```response
   ┌─id─┬─name───────────────┐
1. │  1 │ Alexey Milovidov   │
2. │  1 │ Clicky McClickface │
   └────┴────────────────────┘
```
Let's do the same for our UK property prices data. From any of the host clients,
run the following query to create a distributed table using the existing table
we created previously with `ON CLUSTER`:
```sql
CREATE TABLE IF NOT EXISTS uk.uk_price_paid_distributed
ON CLUSTER cluster_2S_1R
ENGINE = Distributed('cluster_2S_1R', 'uk', 'uk_price_paid_local', rand());
```
## Insert data into a distributed table {#inserting-data-into-distributed-table}
Now connect to either of the hosts and insert the data:
```sql
INSERT INTO uk.uk_price_paid_distributed
SELECT
    toUInt32(price_string) AS price,
    parseDateTimeBestEffortUS(time) AS date,
    splitByChar(' ', postcode)[1] AS postcode1,
    splitByChar(' ', postcode)[2] AS postcode2,
    transform(a, ['T', 'S', 'D', 'F', 'O'], ['terraced', 'semi-detached', 'detached', 'flat', 'other']) AS type,
    b = 'Y' AS is_new,
    transform(c, ['F', 'L', 'U'], ['freehold', 'leasehold', 'unknown']) AS duration,
    addr1,
    addr2,
    street,
    locality,
    town,
    district,
    county
FROM url(
    'http://prod1.publicdata.landregistry.gov.uk.s3-website-eu-west-1.amazonaws.com/pp-complete.csv',
    'CSV',
    'uuid_string String,
    price_string String,
    time String,
    postcode String,
    a String,
    b String,
    c String,
    addr1 String,
    addr2 String,
    street String,
    locality String,
    town String,
    district String,
    county String,
    d String,
    e String'
) SETTINGS max_http_get_redirects=10;
```
Once the data is inserted, you can check the number of rows using the distributed
table:
sql title="Query"
SELECT count(*)
FROM uk.uk_price_paid_distributed
response title="Response"
   ┌──count()─┐
1. │ 30212555 │ -- 30.21 million
   └──────────┘
If you run the following query on either host, you will see that the data has been
more or less evenly distributed across the shards (keeping in mind that the choice
of which shard to insert into was set with `rand()`, so results may differ for you):
```sql
-- from clickhouse-01
SELECT count(*)
FROM uk.uk_price_paid_local
--    ┌──count()─┐
-- 1. │ 15107353 │ -- 15.11 million
--    └──────────┘

-- from clickhouse-02
SELECT count(*)
FROM uk.uk_price_paid_local
--    ┌──count()─┐
-- 1. │ 15105202 │ -- 15.11 million
--    └──────────┘
```
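Rather than connecting to each host in turn, you can also gather the per-host counts in a single query with the `clusterAllReplicas` table function (counts will again differ for you, since the sharding key is `rand()`):

```sql
SELECT hostName() AS host, count() AS rows
FROM clusterAllReplicas('cluster_2S_1R', uk.uk_price_paid_local)
GROUP BY host
ORDER BY host;
```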
What will happen if one of the hosts fails? Let's simulate this by shutting down
`clickhouse-01`:
bash
docker stop clickhouse-01
Check that the host is down by running:
bash
docker-compose ps
response title="Response"
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
clickhouse-02 clickhouse/clickhouse-server:latest "/entrypoint.sh" clickhouse-02 X minutes ago Up X minutes 127.0.0.1:8124->8123/tcp, 127.0.0.1:9001->9000/tcp
clickhouse-keeper-01 clickhouse/clickhouse-keeper:latest-alpine "/entrypoint.sh" clickhouse-keeper-01 X minutes ago Up X minutes 127.0.0.1:9181->9181/tcp
clickhouse-keeper-02 clickhouse/clickhouse-keeper:latest-alpine "/entrypoint.sh" clickhouse-keeper-02 X minutes ago Up X minutes 127.0.0.1:9182->9181/tcp
clickhouse-keeper-03 clickhouse/clickhouse-keeper:latest-alpine "/entrypoint.sh" clickhouse-keeper-03 X minutes ago Up X minutes 127.0.0.1:9183->9181/tcp
Now from `clickhouse-02` run the same select query we ran before on the distributed
table:
sql
SELECT count(*)
FROM uk.uk_price_paid_distributed
```response title="Response"
Received exception from server (version 25.5.2):
Code: 279. DB::Exception: Received from localhost:9000. DB::Exception: All connection tries failed. Log:
Code: 32. DB::Exception: Attempt to read after eof. (ATTEMPT_TO_READ_AFTER_EOF) (version 25.5.2.47 (official build))
Code: 209. DB::NetException: Timeout: connect timed out: 192.168.7.1:9000 (clickhouse-01:9000, 192.168.7.1, local address: 192.168.7.2:37484, connection timeout 1000 ms). (SOCKET_TIMEOUT) (version 25.5.2.47 (official build))
highlight-next-line
Code: 198. DB::NetException: Not found address of host: clickhouse-01: (clickhouse-01:9000, 192.168.7.1, local address: 192.168.7.2:37484). (DNS_ERROR) (version 25.5.2.47 (official build))
: While executing Remote. (ALL_CONNECTION_TRIES_FAILED)
```
Unfortunately, our cluster is not fault-tolerant. If one of the hosts fails, the
cluster is considered unhealthy and the query fails, unlike with the replicated
table we saw in the previous example, for which we were able to insert data even
when one of the hosts failed.
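From the surviving node you can also see how the cluster tracks the failed host: `system.clusters` keeps per-host error counters, which grow as connection attempts fail:

```sql
SELECT host_name, errors_count, estimated_recovery_time
FROM system.clusters
WHERE cluster = 'cluster_2S_1R';
```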
Conclusion {#conclusion}
The advantage of this cluster topology is that data gets distributed across
separate hosts and uses half the storage per node. More importantly, queries
are processed across both shards, which is more efficient in terms of memory
utilization and reduces I/O per host.
The main disadvantage of this cluster topology is, of course, that losing one of
the hosts renders us unable to serve queries.
In the next example, we'll look at how to set up a cluster with two shards and
two replicas, offering both scalability and fault tolerance.
---
slug: /architecture/cluster-deployment
sidebar_label: 'Replication + Scaling'
sidebar_position: 100
title: 'Replication + Scaling'
description: 'By going through this tutorial, you will learn how to set up a simple ClickHouse cluster.'
doc_type: 'guide'
keywords: ['cluster deployment', 'replication', 'sharding', 'high availability', 'scalability']
---
import Image from '@theme/IdealImage';
import SharedReplicatedArchitecture from '@site/static/images/deployment-guides/replication-sharding-examples/both.png';
import ConfigExplanation from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_config_explanation.mdx';
import ListenHost from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_listen_host.mdx';
import KeeperConfig from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_keeper_config.mdx';
import KeeperConfigExplanation from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_keeper_explanation.mdx';
import VerifyKeeperStatus from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_verify_keeper_using_mntr.mdx';
import DedicatedKeeperServers from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_dedicated_keeper_servers.mdx';
import ExampleFiles from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_working_example.mdx';
import CloudTip from '@site/docs/deployment-guides/replication-sharding-examples/_snippets/_cloud_tip.mdx';
In this example, you'll learn how to set up a simple ClickHouse cluster which
both replicates and scales. It consists of two shards and two replicas, with a
3-node ClickHouse Keeper cluster for managing coordination and keeping quorum
in the cluster.
The architecture of the cluster you will be setting up is shown below:
Prerequisites {#prerequisites}
- You've set up a local ClickHouse server before
- You are familiar with basic configuration concepts of ClickHouse, such as configuration files
- You have Docker installed on your machine
Set up directory structure and test environment {#set-up}
In this tutorial, you will use Docker Compose to set up the ClickHouse cluster.
This setup could be modified to work for separate local machines, virtual
machines, or cloud instances as well.
Run the following commands to set up the directory structure for this example:
```bash
mkdir cluster_2S_2R
cd cluster_2S_2R
# Create clickhouse-keeper directories
for i in {01..03}; do
mkdir -p fs/volumes/clickhouse-keeper-${i}/etc/clickhouse-keeper
done
# Create clickhouse-server directories
for i in {01..04}; do
mkdir -p fs/volumes/clickhouse-${i}/etc/clickhouse-server
done
```
Add the following `docker-compose.yml` file to the `cluster_2S_2R` directory:
yaml title="docker-compose.yml"
version: '3.8'
services:
clickhouse-01:
image: "clickhouse/clickhouse-server:latest"
user: "101:101"
container_name: clickhouse-01
hostname: clickhouse-01
volumes:
- ${PWD}/fs/volumes/clickhouse-01/etc/clickhouse-server/config.d/config.xml:/etc/clickhouse-server/config.d/config.xml
- ${PWD}/fs/volumes/clickhouse-01/etc/clickhouse-server/users.d/users.xml:/etc/clickhouse-server/users.d/users.xml
ports:
- "127.0.0.1:8123:8123"
- "127.0.0.1:9000:9000"
depends_on:
- clickhouse-keeper-01
- clickhouse-keeper-02
- clickhouse-keeper-03
clickhouse-02:
image: "clickhouse/clickhouse-server:latest"
user: "101:101"
container_name: clickhouse-02
hostname: clickhouse-02
volumes:
- ${PWD}/fs/volumes/clickhouse-02/etc/clickhouse-server/config.d/config.xml:/etc/clickhouse-server/config.d/config.xml
- ${PWD}/fs/volumes/clickhouse-02/etc/clickhouse-server/users.d/users.xml:/etc/clickhouse-server/users.d/users.xml
ports:
- "127.0.0.1:8124:8123"
- "127.0.0.1:9001:9000"
depends_on:
- clickhouse-keeper-01
- clickhouse-keeper-02
- clickhouse-keeper-03
clickhouse-03:
image: "clickhouse/clickhouse-server:latest"
user: "101:101"
container_name: clickhouse-03
hostname: clickhouse-03
volumes:
- ${PWD}/fs/volumes/clickhouse-03/etc/clickhouse-server/config.d/config.xml:/etc/clickhouse-server/config.d/config.xml
- ${PWD}/fs/volumes/clickhouse-03/etc/clickhouse-server/users.d/users.xml:/etc/clickhouse-server/users.d/users.xml
ports:
- "127.0.0.1:8125:8123"
- "127.0.0.1:9002:9000"
depends_on:
- clickhouse-keeper-01
- clickhouse-keeper-02
- clickhouse-keeper-03
clickhouse-04:
image: "clickhouse/clickhouse-server:latest"
user: "101:101"
container_name: clickhouse-04
hostname: clickhouse-04
volumes:
- ${PWD}/fs/volumes/clickhouse-04/etc/clickhouse-server/config.d/config.xml:/etc/clickhouse-server/config.d/config.xml
- ${PWD}/fs/volumes/clickhouse-04/etc/clickhouse-server/users.d/users.xml:/etc/clickhouse-server/users.d/users.xml
ports:
- "127.0.0.1:8126:8123"
- "127.0.0.1:9003:9000"
depends_on:
- clickhouse-keeper-01
- clickhouse-keeper-02
- clickhouse-keeper-03
clickhouse-keeper-01:
image: "clickhouse/clickhouse-keeper:latest-alpine"
user: "101:101"
container_name: clickhouse-keeper-01
hostname: clickhouse-keeper-01
volumes:
- ${PWD}/fs/volumes/clickhouse-keeper-01/etc/clickhouse-keeper/keeper_config.xml:/etc/clickhouse-keeper/keeper_config.xml
ports:
- "127.0.0.1:9181:9181"
clickhouse-keeper-02:
image: "clickhouse/clickhouse-keeper:latest-alpine"
user: "101:101"
container_name: clickhouse-keeper-02
hostname: clickhouse-keeper-02
volumes:
- ${PWD}/fs/volumes/clickhouse-keeper-02/etc/clickhouse-keeper/keeper_config.xml:/etc/clickhouse-keeper/keeper_config.xml
ports:
- "127.0.0.1:9182:9181"
clickhouse-keeper-03:
image: "clickhouse/clickhouse-keeper:latest-alpine"
user: "101:101"
container_name: clickhouse-keeper-03
hostname: clickhouse-keeper-03
volumes:
- ${PWD}/fs/volumes/clickhouse-keeper-03/etc/clickhouse-keeper/keeper_config.xml:/etc/clickhouse-keeper/keeper_config.xml
ports:
- "127.0.0.1:9183:9181" | {"source_file": "03_2_shards_2_replicas.md"} | [
Create the following sub-directories and files:
bash
for i in {01..04}; do
mkdir -p fs/volumes/clickhouse-${i}/etc/clickhouse-server/config.d
mkdir -p fs/volumes/clickhouse-${i}/etc/clickhouse-server/users.d
touch fs/volumes/clickhouse-${i}/etc/clickhouse-server/config.d/config.xml
touch fs/volumes/clickhouse-${i}/etc/clickhouse-server/users.d/users.xml
done
Configure ClickHouse nodes {#configure-clickhouse-servers}
Server setup {#server-setup}
Now modify each empty configuration file `config.xml` located at
`fs/volumes/clickhouse-{}/etc/clickhouse-server/config.d`. The lines which are
highlighted below need to be changed to be specific to each node:
xml
<clickhouse replace="true">
<logger>
<level>debug</level>
<log>/var/log/clickhouse-server/clickhouse-server.log</log>
<errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
<size>1000M</size>
<count>3</count>
</logger>
<!--highlight-next-line-->
<display_name>cluster_2S_2R node 1</display_name>
<listen_host>0.0.0.0</listen_host>
<http_port>8123</http_port>
<tcp_port>9000</tcp_port>
<user_directories>
<users_xml>
<path>users.xml</path>
</users_xml>
<local_directory>
<path>/var/lib/clickhouse/access/</path>
</local_directory>
</user_directories>
<distributed_ddl>
<path>/clickhouse/task_queue/ddl</path>
</distributed_ddl>
<remote_servers>
<cluster_2S_2R>
<shard>
<internal_replication>true</internal_replication>
<replica>
<host>clickhouse-01</host>
<port>9000</port>
</replica>
<replica>
<host>clickhouse-03</host>
<port>9000</port>
</replica>
</shard>
<shard>
<internal_replication>true</internal_replication>
<replica>
<host>clickhouse-02</host>
<port>9000</port>
</replica>
<replica>
<host>clickhouse-04</host>
<port>9000</port>
</replica>
</shard>
</cluster_2S_2R>
</remote_servers>
<zookeeper>
<node>
<host>clickhouse-keeper-01</host>
<port>9181</port>
</node>
<node>
<host>clickhouse-keeper-02</host>
<port>9181</port>
</node>
<node>
<host>clickhouse-keeper-03</host>
<port>9181</port>
</node>
</zookeeper>
<!--highlight-start-->
<macros>
<shard>01</shard>
<replica>01</replica>
</macros>
<!--highlight-end-->
</clickhouse>
| Directory                                                 | File         |
|-----------------------------------------------------------|--------------|
| `fs/volumes/clickhouse-01/etc/clickhouse-server/config.d` | `config.xml` |
| `fs/volumes/clickhouse-02/etc/clickhouse-server/config.d` | `config.xml` |
| `fs/volumes/clickhouse-03/etc/clickhouse-server/config.d` | `config.xml` |
| `fs/volumes/clickhouse-04/etc/clickhouse-server/config.d` | `config.xml` |
Each section of the above configuration file is explained in more detail below.
Networking and logging {#networking}
Logging configuration is defined in the `<logger>` block. This example
configuration gives you a debug log that will roll over at 1000M three times:
xml
<logger>
<level>debug</level>
<log>/var/log/clickhouse-server/clickhouse-server.log</log>
<errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
<size>1000M</size>
<count>3</count>
</logger>
For more information on logging configuration, see the comments included in the
default ClickHouse configuration file.
Cluster configuration {#cluster-config}
Configuration for the cluster is set up in the `<remote_servers>` block.
Here the cluster name `cluster_2S_2R` is defined.
The `<cluster_2S_2R></cluster_2S_2R>` block defines the layout of the cluster,
using the `<shard></shard>` and `<replica></replica>` settings, and acts as a
template for distributed DDL queries, which are queries that execute across the
cluster using the `ON CLUSTER` clause. By default, distributed DDL queries
are allowed, but can also be turned off with the setting `allow_distributed_ddl_queries`.
`internal_replication` is set to true so that data is written to just one of the replicas.
```xml
<remote_servers>
    <cluster_2S_2R>
        <shard>
            <internal_replication>true</internal_replication>
            <replica>
                <host>clickhouse-01</host>
                <port>9000</port>
            </replica>
            <replica>
                <host>clickhouse-03</host>
                <port>9000</port>
            </replica>
        </shard>
        <shard>
            <internal_replication>true</internal_replication>
            <replica>
                <host>clickhouse-02</host>
                <port>9000</port>
            </replica>
            <replica>
                <host>clickhouse-04</host>
                <port>9000</port>
            </replica>
        </shard>
    </cluster_2S_2R>
</remote_servers>
```
The `<cluster_2S_2R></cluster_2S_2R>` section defines the layout of the cluster,
and acts as a template for distributed DDL queries, which are queries that execute
across the cluster using the `ON CLUSTER` clause.
Keeper configuration {#keeper-config-explanation}
The `<zookeeper>` section tells ClickHouse where ClickHouse Keeper (or ZooKeeper) is running.
As we are using a ClickHouse Keeper cluster, each `<node>` of the cluster needs to be
specified, along with its hostname and port number using the `<host>` and `<port>` tags
respectively.
Set up of ClickHouse Keeper is explained in the next step of the tutorial.
xml
<zookeeper>
<node>
<host>clickhouse-keeper-01</host>
<port>9181</port>
</node>
<node>
<host>clickhouse-keeper-02</host>
<port>9181</port>
</node>
<node>
<host>clickhouse-keeper-03</host>
<port>9181</port>
</node>
</zookeeper>
:::note
Although it is possible to run ClickHouse Keeper on the same server as ClickHouse Server,
in production environments we strongly recommend that ClickHouse Keeper runs on dedicated hosts.
:::
Macros configuration {#macros-config-explanation}
Additionally, the
<macros>
section is used to define parameter substitutions for
replicated tables. These are listed in
system.macros
and allow using substitutions
like
{shard}
and
{replica}
in queries.
xml
<macros>
<shard>01</shard>
<replica>01</replica>
</macros>
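You can confirm which macro values a node picked up by querying `system.macros` on that node. On node 1, configured as above, both substitutions should be `01`:

```sql
SELECT * FROM system.macros;
```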
User configuration {#cluster-configuration}
Now modify each empty configuration file `users.xml` located at
`fs/volumes/clickhouse-{}/etc/clickhouse-server/users.d` with the following:
```xml title="/users.d/users.xml"
<?xml version="1.0"?>
<clickhouse replace="true">
    <profiles>
        <default>
            <max_memory_usage>10000000000</max_memory_usage>
            <use_uncompressed_cache>0</use_uncompressed_cache>
            <load_balancing>in_order</load_balancing>
            <log_queries>1</log_queries>
        </default>
    </profiles>
    <users>
        <default>
            <access_management>1</access_management>
            <profile>default</profile>
            <networks>
                <ip>::/0</ip>
            </networks>
            <quota>default</quota>
            <access_management>1</access_management>
            <named_collection_control>1</named_collection_control>
            <show_named_collections>1</show_named_collections>
            <show_named_collections_secrets>1</show_named_collections_secrets>
        </default>
    </users>
    <quotas>
        <default>
            <interval>
                <duration>3600</duration>
                <queries>0</queries>
                <errors>0</errors>
                <result_rows>0</result_rows>
                <read_rows>0</read_rows>
                <execution_time>0</execution_time>
            </interval>
        </default>
    </quotas>
</clickhouse>
```
In this example, the default user is configured without a password for simplicity.
In practice, this is discouraged.
:::note
In this example, each
users.xml
file is identical for all nodes in the cluster.
:::
Configure ClickHouse Keeper {#configure-clickhouse-keeper-nodes}
Next you will configure ClickHouse Keeper, which is used for coordination.
Keeper setup {#configuration-explanation}
| Directory                                               | File                |
|---------------------------------------------------------|---------------------|
| `fs/volumes/clickhouse-keeper-01/etc/clickhouse-keeper` | `keeper_config.xml` |
| `fs/volumes/clickhouse-keeper-02/etc/clickhouse-keeper` | `keeper_config.xml` |
| `fs/volumes/clickhouse-keeper-03/etc/clickhouse-keeper` | `keeper_config.xml` |
Test the setup {#test-the-setup}
Make sure that Docker is running on your machine.
Start the cluster using the `docker-compose up` command from the root of the
`cluster_2S_2R` directory:
bash
docker-compose up -d
You should see docker begin to pull the ClickHouse and Keeper images,
and then start the containers:
bash
[+] Running 8/8
 ✔ Network cluster_2s_2r_default   Created
 ✔ Container clickhouse-keeper-03  Started
 ✔ Container clickhouse-keeper-02  Started
 ✔ Container clickhouse-keeper-01  Started
 ✔ Container clickhouse-01         Started
 ✔ Container clickhouse-02         Started
 ✔ Container clickhouse-04         Started
 ✔ Container clickhouse-03         Started
To verify that the cluster is running, connect to any one of the nodes and run the
following query. The command to connect to the first node is shown:
```bash
# Connect to any node
docker exec -it clickhouse-01 clickhouse-client
```
If successful, you will see the ClickHouse client prompt:
response
cluster_2S_2R node 1 :)
Run the following query to check what cluster topologies are defined for which
hosts:
sql title="Query"
SELECT
cluster,
shard_num,
replica_num,
host_name,
port
FROM system.clusters;
response title="Response"
   ┌─cluster───────┬─shard_num─┬─replica_num─┬─host_name─────┬─port─┐
1. │ cluster_2S_2R │         1 │           1 │ clickhouse-01 │ 9000 │
2. │ cluster_2S_2R │         1 │           2 │ clickhouse-03 │ 9000 │
3. │ cluster_2S_2R │         2 │           1 │ clickhouse-02 │ 9000 │
4. │ cluster_2S_2R │         2 │           2 │ clickhouse-04 │ 9000 │
5. │ default       │         1 │           1 │ localhost     │ 9000 │
   └───────────────┴───────────┴─────────────┴───────────────┴──────┘
Run the following query to check the status of the ClickHouse Keeper cluster:
sql title="Query"
SELECT *
FROM system.zookeeper
WHERE path IN ('/', '/clickhouse')
response title="Response"
   ┌─name───────┬─value─┬─path────────┐
1. │ task_queue │       │ /clickhouse │
2. │ sessions   │       │ /clickhouse │
3. │ keeper     │       │ /           │
4. │ clickhouse │       │ /           │
   └────────────┴───────┴─────────────┘
With this, you have successfully set up a ClickHouse cluster with two shards and two replicas.
In the next step, you will create a table in the cluster.
Create a database {#creating-a-database}
Now that you have verified the cluster is correctly set up and running, you
will recreate the same table as the one used in the UK property prices
example dataset tutorial. It consists of around 30 million rows of prices paid
for real-estate property in England and Wales since 1995.
Connect to the client of each host by running each of the following commands from separate terminal
tabs or windows:
bash
docker exec -it clickhouse-01 clickhouse-client
docker exec -it clickhouse-02 clickhouse-client
docker exec -it clickhouse-03 clickhouse-client
docker exec -it clickhouse-04 clickhouse-client
You can run the query below from clickhouse-client of each host to confirm that there are no databases created yet,
apart from the default ones:
sql title="Query"
SHOW DATABASES;
response title="Response"
   ┌─name───────────────┐
1. │ INFORMATION_SCHEMA │
2. │ default            │
3. │ information_schema │
4. │ system             │
   └────────────────────┘
From the `clickhouse-01` client, run the following distributed DDL query using
the `ON CLUSTER` clause to create a new database called `uk`:
sql
CREATE DATABASE IF NOT EXISTS uk
-- highlight-next-line
ON CLUSTER cluster_2S_2R;
You can again run the same query as before from the client of each host
to confirm that the database has been created across the cluster, despite running
the query only from `clickhouse-01`:
sql
SHOW DATABASES;
```response
   ┌─name───────────────┐
1. │ INFORMATION_SCHEMA │
2. │ default            │
3. │ information_schema │
4. │ system             │
#highlight-next-line
5. │ uk                 │
   └────────────────────┘
```
Create a table on the cluster {#creating-a-table}
Now that the database has been created, next you will create a table with replication.
Run the following query from any of the host clients:
sql
CREATE TABLE IF NOT EXISTS uk.uk_price_paid_local
--highlight-next-line
ON CLUSTER cluster_2S_2R
(
price UInt32,
date Date,
postcode1 LowCardinality(String),
postcode2 LowCardinality(String),
type Enum8('terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4, 'other' = 0),
is_new UInt8,
duration Enum8('freehold' = 1, 'leasehold' = 2, 'unknown' = 0),
addr1 String,
addr2 String,
street LowCardinality(String),
locality LowCardinality(String),
town LowCardinality(String),
district LowCardinality(String),
county LowCardinality(String)
)
--highlight-next-line
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{database}/{table}/{shard}', '{replica}')
ORDER BY (postcode1, postcode2, addr1, addr2);
Notice that it is identical to the query used in the original `CREATE`
statement of the UK property prices example dataset tutorial,
except for the `ON CLUSTER` clause and use of the `ReplicatedMergeTree` engine.
The `ON CLUSTER` clause is designed for distributed execution of DDL (Data Definition Language)
queries such as `CREATE`, `DROP`, `ALTER`, and `RENAME`, ensuring that these
schema changes are applied across all nodes in a cluster.
The `ReplicatedMergeTree` engine works just as the ordinary `MergeTree` table
engine, but it will also replicate the data. It requires two parameters to be specified:
- `zoo_path`: The Keeper/ZooKeeper path to the table's metadata.
- `replica_name`: The table's replica name.

The `zoo_path` parameter can be set to anything you choose, although it is
recommended to follow the convention of using the prefix
```text
/clickhouse/tables/{shard}/{database}/{table}
```
where:
- `{database}` and `{table}` will be replaced automatically.
- `{shard}` and `{replica}` are macros which were defined previously in the
  `config.xml` file of each ClickHouse node.
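Once the table exists, you can check how these macros were expanded into the actual Keeper path on each replica by querying `system.replicas` from that replica's client:

```sql
SELECT database, table, zookeeper_path, replica_name
FROM system.replicas
WHERE table = 'uk_price_paid_local';
```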
You can run the query below from each host's client to confirm that the table has been created across the cluster:
sql title="Query"
SHOW TABLES IN uk;
response title="Response"
   ┌─name────────────────┐
1. │ uk_price_paid_local │
   └─────────────────────┘
Insert data into a distributed table {#inserting-data-using-distributed}
To insert data into the table, `ON CLUSTER` cannot be used, as it does
not apply to DML (Data Manipulation Language) queries such as `INSERT`, `UPDATE`,
and `DELETE`. To insert data, it is necessary to make use of the
`Distributed` table engine.
As you learned in the guide for setting up a cluster with 2 shards and 1 replica,
distributed tables are tables which have access to shards located on different
hosts and are defined using the `Distributed` table engine.
The distributed table acts as the interface across all the shards in the cluster.
From any of the host clients, run the following query to create a distributed table
using the existing replicated table we created in the previous step:
sql
CREATE TABLE IF NOT EXISTS uk.uk_price_paid_distributed
ON CLUSTER cluster_2S_2R
ENGINE = Distributed('cluster_2S_2R', 'uk', 'uk_price_paid_local', rand());
On each host you will now see the following tables in the `uk` database:
sql
   ┌─name──────────────────────┐
1. │ uk_price_paid_distributed │
2. │ uk_price_paid_local       │
   └───────────────────────────┘
Data can be inserted into the `uk_price_paid_distributed` table from any of the
host clients using the following query:
sql
INSERT INTO uk.uk_price_paid_distributed
SELECT
toUInt32(price_string) AS price,
parseDateTimeBestEffortUS(time) AS date,
splitByChar(' ', postcode)[1] AS postcode1,
splitByChar(' ', postcode)[2] AS postcode2,
transform(a, ['T', 'S', 'D', 'F', 'O'], ['terraced', 'semi-detached', 'detached', 'flat', 'other']) AS type,
b = 'Y' AS is_new,
transform(c, ['F', 'L', 'U'], ['freehold', 'leasehold', 'unknown']) AS duration,
addr1,
addr2,
street,
locality,
town,
district,
county
FROM url(
'http://prod1.publicdata.landregistry.gov.uk.s3-website-eu-west-1.amazonaws.com/pp-complete.csv',
'CSV',
'uuid_string String,
price_string String,
time String,
postcode String,
a String,
b String,
c String,
addr1 String,
addr2 String,
street String,
locality String,
town String,
district String,
county String,
d String,
e String'
) SETTINGS max_http_get_redirects=10;
Run the following query to confirm that the data inserted has been evenly distributed
across the nodes of our cluster:
```sql
SELECT count(*)
FROM uk.uk_price_paid_distributed;

-- Count only the rows stored locally on the host you are connected to
SELECT count(*) FROM uk.uk_price_paid_local;
```
```response
   ┌──count()─┐
1. │ 30212555 │ -- 30.21 million
   └──────────┘
   ┌──count()─┐
1. │ 15105983 │ -- 15.11 million
   └──────────┘
```
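You can also collect the per-host row counts in one query with the `clusterAllReplicas` table function; because each shard now has two replicas, each shard's count appears on two hosts:

```sql
SELECT hostName() AS host, count() AS rows
FROM clusterAllReplicas('cluster_2S_2R', uk.uk_price_paid_local)
GROUP BY host
ORDER BY host;
```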
Conclusion {#conclusion}
The advantage of this cluster topology with 2 shards and 2 replicas is that it provides both scalability and fault tolerance.
Data is distributed across separate hosts, reducing storage and I/O requirements per node, while queries are processed in parallel across both shards for improved performance and memory efficiency.
Critically, the cluster can tolerate the loss of one node and continue serving queries without interruption, as each shard has a backup replica available on another node.
The main disadvantage of this cluster topology is the increased storage overhead: it requires twice the storage capacity compared to a setup without replicas, as each shard is duplicated.
Additionally, while the cluster can survive a single node failure, losing two nodes simultaneously may render the cluster inoperable, depending on which nodes fail and how shards are distributed.
This topology strikes a balance between availability and cost, making it suitable for production environments where some level of fault tolerance is required without the expense of higher replication factors.
To learn how ClickHouse Cloud processes queries, offering both scalability and
fault tolerance, see the section "Parallel Replicas".
---
slug: /whats-new/changelog/2021
sidebar_position: 6
sidebar_label: '2021'
title: '2021 Changelog'
description: 'Changelog for 2021'
doc_type: 'changelog'
keywords: ['ClickHouse 2021', 'changelog 2021', 'release notes', 'version history', 'new features']
---
ClickHouse release v21.12, 2021-12-15 {#clickhouse-release-v2112-2021-12-15}
Backward Incompatible Change {#backward-incompatible-change}
A fix for a feature that previously had unwanted behaviour.
Do not allow direct select for Kafka/RabbitMQ/FileLog. Can be enabled by setting
`stream_like_engine_allow_direct_select`. Direct select will not be allowed even
if enabled by the setting, in case there is an attached materialized view. For
Kafka and RabbitMQ, direct select, if allowed, will not commit messages by default.
To enable commits with direct select, the user must use the storage-level setting
`kafka{rabbitmq}_commit_on_select=1` (default `0`).
#31053 (Kseniia Sumarokova).
A slight change in behaviour of a new function.
Return unquoted string in JSON_VALUE. Closes
#27965
.
#31008
(
Kseniia Sumarokova
).
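A quick sketch of the changed behaviour (the JSON literal here is just an illustration): before this change, `JSON_VALUE` returned the string including its JSON quotes; afterwards it returns the unquoted value.

```sql
-- Returns the unquoted string `world` (previously: `"world"`)
SELECT JSON_VALUE('{"hello": "world"}', '$.hello');
```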
Setting rename.
Add custom null representation support for TSV/CSV input formats. Fix deserializing Nullable(String) in TSV/CSV/JSONCompactStringsEachRow/JSONStringsEachRow input formats. Rename
output_format_csv_null_representation
and
output_format_tsv_null_representation
to
format_csv_null_representation
and
format_tsv_null_representation
accordingly.
#30497
(
Kruglov Pavel
).
Further deprecation of already unused code.
This is relevant only for users of ClickHouse versions older than 20.6. A "leader election" mechanism is removed from
ReplicatedMergeTree
, because multiple leaders have been supported since 20.6. If you are upgrading from an older version and a replica with an old version is the leader, the server will fail to start after the upgrade. Stop replicas with the old version to let the new version start. After that it will not be possible to downgrade to a version older than 20.6.
#32140
(
tavplubix
).
New Feature {#new-feature}
Implemented more of the ZooKeeper Four Letter Words commands in clickhouse-keeper: https://zookeeper.apache.org/doc/r3.4.8/zookeeperAdmin.html#sc_zkCommands.
#28981
(
JackyWoo
). Now
clickhouse-keeper
is feature complete.
Support for
Bool
data type.
#31072
(
kevin wan
).
Support for
PARTITION BY
in File, URL, HDFS storages and with
INSERT INTO
table function. Closes
#30273
.
#30690
(
Kseniia Sumarokova
).
Added
CONSTRAINT ... ASSUME ...
(without checking during
INSERT
). Added query transformation to CNF (https://github.com/ClickHouse/ClickHouse/issues/11749) for more convenient optimization. Added simple query rewriting using constraints (only simple matching now, will be improved to support <,=,>... later). Added ability to replace heavy columns with light columns if it's possible.
#18787
(
Nikita Vasilev
).
Basic access authentication for http/url functions.
#31648
(
michael1589
). | {"source_file": "2021.md"} | [
0.019845636561512947,
-0.04451178014278412,
0.0640878900885582,
0.019512716680765152,
-0.002880639396607876,
-0.01151884626597166,
-0.04797450825572014,
-0.05108608305454254,
0.033753104507923126,
0.16355682909488678,
0.03771590813994408,
-0.018253032118082047,
-0.055799853056669235,
0.028... |
8a58a76e-5246-4e70-9bb6-ffe212bc6e30 | Basic access authentication for http/url functions.
#31648
(
michael1589
).
Support
INTERVAL
type in
STEP
clause for
WITH FILL
modifier.
#30927
(
Anton Popov
).
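A minimal sketch of the new syntax, assuming arbitrary example dates; the new part is using an `INTERVAL` expression after `STEP`:

```sql
-- Fill missing dates between the existing rows in 3-day steps
SELECT toDate('2021-12-01') + INTERVAL 6 * number DAY AS d
FROM numbers(3)
ORDER BY d WITH FILL STEP INTERVAL 3 DAY;
```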
Add support for parallel reading from multiple files and support globs in
FROM INFILE
clause.
#30135
(
Filatenkov Artur
).
Add support for
Identifier
table and database query parameters. Closes
#27226
.
#28668
(
Nikolay Degterinsky
).
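With `Identifier` query parameters, database and table names can be parameterized instead of interpolated into the query text. A minimal sketch using the built-in `system.one` table:

```sql
SET param_db  = 'system';
SET param_tbl = 'one';

-- The identifiers are substituted safely at parse time
SELECT * FROM {db:Identifier}.{tbl:Identifier};
```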
TLDR: Major improvements of completeness and consistency of text formats.
Refactor formats
TSV
,
TSVRaw
,
CSV
and
JSONCompactEachRow
,
JSONCompactStringsEachRow
, remove code duplication, add base interface for formats with
-WithNames
and
-WithNamesAndTypes
suffixes. Add formats
CSVWithNamesAndTypes
,
TSVRawWithNames
,
TSVRawWithNamesAndTypes
,
JSONCompactEachRowWithNames
,
JSONCompactStringsEachRowWithNames
,
RowBinaryWithNames
. Support parallel parsing for formats
TSVWithNamesAndTypes
,
TSVRaw(WithNames/WithNamesAndTypes)
,
CSVWithNamesAndTypes
,
JSONCompactEachRow(WithNames/WithNamesAndTypes)
,
JSONCompactStringsEachRow(WithNames/WithNamesAndTypes)
. Support columns mapping and types checking for
RowBinaryWithNamesAndTypes
format. Add setting
input_format_with_types_use_header
which specifies whether we should check that the types written in
<format_name>WithNamesAndTypes
format match the table structure. Add setting
input_format_csv_empty_as_default
and use it in CSV format instead of
input_format_defaults_for_omitted_fields
(because this setting should not control
csv_empty_as_default
). Fix usage of setting
input_format_defaults_for_omitted_fields
(it was used only as
csv_empty_as_default
, but it should control calculation of default expressions for omitted fields). Fix Nullable input/output in
TSVRaw
format, make this format fully compatible with inserting into TSV. Fix inserting NULLs in
LowCardinality(Nullable)
when
input_format_null_as_default
is enabled (previously default values were inserted instead of actual NULLs). Fix strings deserialization in
JSONStringsEachRow
/
JSONCompactStringsEachRow
formats (strings were parsed just until first '\n' or '\t'). Add ability to use
Raw
escaping rule in Template input format. Add diagnostic info for JSONCompactEachRow(WithNames/WithNamesAndTypes) input format. Fix a bug with parallel parsing of
-WithNames
formats in case when setting
min_chunk_bytes_for_parallel_parsing
is less than bytes in a single row.
#30178
(
Kruglov Pavel
). Allow printing/parsing names and types of columns in
CustomSeparated
input/output format. Add formats
CustomSeparatedWithNames/WithNamesAndTypes
similar to
TSVWithNames/WithNamesAndTypes
.
#31434
(
Kruglov Pavel
).
Aliyun OSS Storage support.
#31286
(
cfcz48
).
Exposes all settings of the global thread pool in the configuration file.
#31285
(
Tomáš Hromada
). | {"source_file": "2021.md"} | [
-0.1148177981376648,
-0.006419754587113857,
-0.11505281925201416,
-0.014629477635025978,
-0.08258422464132309,
-0.02938275970518589,
-0.04097503796219826,
0.06179877370595932,
-0.041026677936315536,
0.03821375593543053,
-0.006324872840195894,
-0.018657993525266647,
0.061094243079423904,
-0... |
a98dca12-5991-41cb-86b9-b37c4107d8ca | Aliyun OSS Storage support.
#31286
(
cfcz48
).
Exposes all settings of the global thread pool in the configuration file.
#31285
(
Tomáš Hromada
).
Introduced window functions
exponentialTimeDecayedSum
,
exponentialTimeDecayedMax
,
exponentialTimeDecayedCount
and
exponentialTimeDecayedAvg
which are more effective than
exponentialMovingAverage
for bigger windows. Also more use-cases were covered.
#29799
(
Vladimir Chebotarev
).
Add option to compress logs before writing them to a file using LZ4. Closes
#23860
.
#29219
(
Nikolay Degterinsky
).
Support
JOIN ON 1 = 1
that have CROSS JOIN semantic. This closes
#25578
.
#25894
(
Vladimir C
).
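A minimal sketch of the new semantics; `t1` and `t2` are hypothetical tables:

```sql
-- A constant-true ON condition now behaves as a CROSS JOIN:
SELECT * FROM t1 JOIN t2 ON 1 = 1;

-- equivalent to:
SELECT * FROM t1 CROSS JOIN t2;
```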
Add Map combinator for
Map
type. - Rename old
sum-, min-, max- Map
for mapped arrays to
sum-, min-, max- MappedArrays
.
#24539
(
Ildus Kurbangaliev
).
Make reading from HTTP retriable. Closes
#29696
.
#29894
(
Kseniia Sumarokova
).
Experimental Feature {#experimental-feature}
WINDOW VIEW
to enable stream processing in ClickHouse.
#8331
(
vxider
).
Drop support for using Ordinary databases with
MaterializedMySQL
.
#31292
(
Stig Bakken
).
Implement the commands BACKUP and RESTORE for the Log family. This feature is under development.
#30688
(
Vitaly Baranov
).
Performance Improvement {#performance-improvement}
Reduce memory usage when reading with
s3
/
url
/
hdfs
formats
Parquet
,
ORC
,
Arrow
(controlled by setting
input_format_allow_seeks
, enabled by default). Also add setting
remote_read_min_bytes_for_seek
to control seeks. Closes
#10461
. Closes
#16857
.
#30936
(
Kseniia Sumarokova
).
Add optimizations for constant conditions in JOIN ON, ref
#26928
.
#27021
(
Vladimir C
).
Support parallel formatting for all text formats, except
JSONEachRowWithProgress
and
PrettyCompactMonoBlock
.
#31489
(
Kruglov Pavel
).
Speed up count over nullable columns.
#31806
(
Raúl Marín
).
Speed up
avg
and
sumCount
aggregate functions.
#31694
(
Raúl Marín
).
Improve performance of JSON and XML output formats.
#31673
(
alexey-milovidov
).
Improve performance of syncing data to block device. This closes
#31181
.
#31229
(
zhanglistar
).
Fixing query performance issue in
LiveView
tables. Fixes
#30831
.
#31006
(
vzakaznikov
).
Speed up query parsing.
#31949
(
Raúl Marín
).
Allow to split
GraphiteMergeTree
rollup rules for plain/tagged metrics (optional
rule_type
field).
#25122
(
Michail Safronov
).
Remove excessive
DESC TABLE
requests for
remote()
(in case of
remote('127.1', system.one)
(i.e. identifier as the db.table instead of string) there was excessive
DESC TABLE
request).
#32019
(
Azat Khuzhin
).
Optimize function
tupleElement
to read a subcolumn when the setting
optimize_functions_to_subcolumns
is enabled.
#31261
(
Anton Popov
).
Optimize function
mapContains
to read the subcolumn
key
when the setting
optimize_functions_to_subcolumns
is enabled.
#31218
(
Anton Popov
). | {"source_file": "2021.md"} | [
-0.056933868676424026,
0.014168246649205685,
-0.09149120002985,
0.04253481328487396,
-0.018724219873547554,
-0.09750885516405106,
-0.034306012094020844,
-0.009597414173185825,
0.011317056603729725,
0.01713467948138714,
0.04405336081981659,
-0.052509475499391556,
0.016490699723362923,
-0.00... |
a1adaacb-ce04-4c81-beda-92896690f718 | Optimize function
mapContains
to read the subcolumn
key
when the setting
optimize_functions_to_subcolumns
is enabled.
#31218
(
Anton Popov
).
Add settings
merge_tree_min_rows_for_concurrent_read_for_remote_filesystem
and
merge_tree_min_bytes_for_concurrent_read_for_remote_filesystem
.
#30970
(
Kseniia Sumarokova
).
Skipping mutations of different partitions in
StorageMergeTree
.
#21326
(
Vladimir Chebotarev
).
Improvement {#improvement}
Do not allow to drop a table or dictionary if some tables or dictionaries depend on it.
#30977
(
tavplubix
).
Allow versioning of aggregate function states. Now we can introduce backward compatible changes in serialization format of aggregate function states. Closes
#12552
.
#24820
(
Kseniia Sumarokova
).
Support PostgreSQL style
ALTER MODIFY COLUMN
syntax.
#32003
(
SuperDJY
).
Added
update_field
support for
RangeHashedDictionary
,
ComplexKeyRangeHashedDictionary
.
#32185
(
Maksim Kita
).
The
murmurHash3_128
and
sipHash128
functions now accept an arbitrary number of arguments. This closes
#28774
.
#28965
(
小路
).
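A short illustration of the relaxed signatures; the argument values are arbitrary:

```sql
-- Both hash functions now accept any number of arguments:
SELECT hex(murmurHash3_128('key', 'value', 42));
SELECT hex(sipHash128('a', 'b', 'c'));
```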
Support default expression for
HDFS
storage and optimize fetching when source is column oriented.
#32256
(
ζζ¬
).
Improve the operation name of an opentelemetry span.
#32234
(
Frank Chen
).
Use
Content-Type: application/x-ndjson
(http://ndjson.org/) for output format
JSONEachRow
.
#32223
(
Dmitriy Dorofeev
).
Improve skipping unknown fields with the quoted escaping rule in Template/CustomSeparated formats. Previously you could skip only quoted strings; now you can skip values of any type.
#32204
(
Kruglov Pavel
).
Now
clickhouse-keeper
refuses to start or apply configuration changes when they contain duplicated IDs or endpoints. Fixes
#31339
.
#32121
(
alesapin
).
Set Content-Type in HTTP packets issued from URL engine.
#32113
(
Frank Chen
).
Return Content-Type as 'application/json' for
JSONEachRow
format if
output_format_json_array_of_rows
is enabled.
#32112
(
Frank Chen
).
Allow to parse
+
before
Float32
/
Float64
values.
#32079
(
Kruglov Pavel
).
Allow a user configured
hdfs_replication
parameter for
DiskHDFS
and
StorageHDFS
. Closes
#32039
.
#32049
(
leosunli
).
Added ClickHouse
exception
and
exception_code
fields to opentelemetry span log.
#32040
(
Frank Chen
).
Improve opentelemetry span log duration: it was zero at the query level if there was a query exception.
#32038
(
Frank Chen
).
Fix the issue that
LowCardinality
of
Int256
cannot be created.
#31832
(
alexey-milovidov
).
Recreate
system.*_log
tables in case of different engine/partition_by.
#31824
(
Azat Khuzhin
).
MaterializedMySQL
: Fix issue with table named 'table'.
#31781
(
Håvard Kvålen
).
ClickHouse dictionary source: support predefined connections. Closes
#31705
.
#31749
(
Kseniia Sumarokova
). | {"source_file": "2021.md"} | [
0.03022778034210205,
0.025202510878443718,
-0.0448743999004364,
-0.045202188193798065,
-0.07175162434577942,
-0.041672006249427795,
-0.07848769426345825,
0.07849645614624023,
-0.036786261945962906,
0.07026898115873337,
0.0346190519630909,
0.025192325934767723,
-0.0029858879279345274,
-0.05... |
5e0eb917-39c9-43c8-abeb-830f0d3eea30 | MaterializedMySQL
: Fix issue with table named 'table'.
#31781
(
Håvard Kvålen
).
ClickHouse dictionary source: support predefined connections. Closes
#31705
.
#31749
(
Kseniia Sumarokova
).
Allow to use predefined connections configuration for Kafka and RabbitMQ engines (the same way as for other integration table engines).
#31691
(
Kseniia Sumarokova
).
Always re-render prompt while navigating history in clickhouse-client. This will improve usability of manipulating very long queries that don't fit on screen.
#31675
(
alexey-milovidov
) (author: Amos Bird).
Add key bindings for navigating through history (instead of lines/history).
#31641
(
Azat Khuzhin
).
Improve the
max_execution_time
checks. Fixed some cases when timeout checks did not happen and a query could run too long.
#31636
(
RaΓΊl MarΓn
).
Better exception message when
users.xml
cannot be loaded due to bad password hash. This closes
#24126
.
#31557
(
Vitaly Baranov
).
Use shard and replica name from
Replicated
database arguments when expanding macros in
ReplicatedMergeTree
arguments if these macros are not defined in config. Closes
#31471
.
#31488
(
tavplubix
).
Better analysis for
min/max/count
projection. Now, with enabled
allow_experimental_projection_optimization
, virtual
min/max/count
projection can be used together with columns from partition key.
#31474
(
Amos Bird
).
Add
--pager
support for
clickhouse-local
.
#31457
(
Azat Khuzhin
).
Fix waiting of the editor during interactive query edition (
waitpid()
returns -1 on
SIGWINCH
and
EDITOR
and
clickhouse-local
/
clickhouse-client
works concurrently).
#31456
(
Azat Khuzhin
).
Throw an exception if there is some garbage after field in
JSONCompactStrings(EachRow)
format.
#31455
(
Kruglov Pavel
).
Default value of
http_send_timeout
and
http_receive_timeout
settings changed from 1800 (30 minutes) to 180 (3 minutes).
#31450
(
tavplubix
).
MaterializedMySQL
now handles
CREATE TABLE ... LIKE ...
DDL queries.
#31410
(
Stig Bakken
).
Return artificial create query when executing
show create table
on system tables.
#31391
(
SuperDJY
).
Previously progress was shown only for
numbers
table function. Now for
numbers_mt
it is also shown.
#31318
(
Kseniia Sumarokova
).
Initial user's roles are used now to find row policies, see
#31080
.
#31262
(
Vitaly Baranov
).
If some obsolete setting is changed - show warning in
system.warnings
.
#31252
(
tavplubix
).
Improved backoff for background cleanup tasks in
MergeTree
. Settings
merge_tree_clear_old_temporary_directories_interval_seconds
and
merge_tree_clear_old_parts_interval_seconds
moved from users settings to merge tree settings.
#31180
(
tavplubix
).
Now every replica will send the client only incremental information about profile event counters.
#31155
(
Dmitry Novik
). This makes
--hardware_utilization
option in
clickhouse-client
usable. | {"source_file": "2021.md"} | [
0.03710184618830681,
-0.037375308573246,
-0.05915099382400513,
0.028651699423789978,
-0.12950022518634796,
-0.002363004023209214,
-0.05014443397521973,
-0.02355705015361309,
-0.013991019688546658,
0.07170141488313675,
-0.02020326443016529,
0.006361528765410185,
0.04857088625431061,
-0.0348... |
b1d7798a-b150-4c8b-b24f-9d5951848107 | Enable multiline editing in clickhouse-client by default. This addresses
#31121
.
#31123
(
Amos Bird
).
Function name normalization for
ALTER
queries. This helps avoid metadata mismatch between creating a table with indices/projections and adding indices/projections via ALTER commands. This is a follow-up PR of https://github.com/ClickHouse/ClickHouse/pull/20174. Marked as an improvement because there are no bug reports and the scenario is somewhat rare.
#31095
(
Amos Bird
).
Support
IF EXISTS
modifier for
RENAME DATABASE
/
TABLE
/
DICTIONARY
query. If this directive is used, one will not get an error if the DATABASE/TABLE/DICTIONARY to be renamed doesn't exist.
#31081
(
victorgao
).
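A minimal sketch of the new modifier; the table names are hypothetical and the exact clause placement follows the usual `IF EXISTS` convention:

```sql
-- No error is thrown if old_table does not exist:
RENAME TABLE IF EXISTS old_table TO new_table;
```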
Cancel vertical merges when partition is dropped. This is a follow-up of https://github.com/ClickHouse/ClickHouse/pull/25684 and https://github.com/ClickHouse/ClickHouse/pull/30996.
#31057
(
Amos Bird
).
The local session inside a ClickHouse dictionary source won't send its events to the session log anymore. This fixes a possible deadlock (tsan alert) on shutdown. Also this PR fixes flaky
test_dictionaries_dependency_xml/
.
#31013
(
Vitaly Baranov
).
Less locking in ALTER command.
#31010
(
Amos Bird
).
Fix
--verbose
option in clickhouse-local interactive mode and allow logging into file.
#30881
(
Kseniia Sumarokova
).
Added
\l
,
\d
,
\c
commands in
clickhouse-client
like in MySQL and PostgreSQL.
#30876
(
Pavel Medvedev
).
For clickhouse-local or clickhouse-client: if there is
--interactive
option with
--query
or
--queries-file
, then first execute them as in non-interactive mode and then start interactive mode.
#30851
(
Kseniia Sumarokova
).
Fix possible "The local set of parts of X doesn't look like the set of parts in ZooKeeper" error (if DROP fails during removing znodes from zookeeper).
#30826
(
Azat Khuzhin
).
Avro format works against Kafka. Setting
output_format_avro_rows_in_file
added.
#30351
(
Ilya Golshtein
).
Allow to specify one or any number of PostgreSQL schemas for one
MaterializedPostgreSQL
database. Closes
#28901
. Closes
#29324
.
#28933
(
Kseniia Sumarokova
).
Replaced default ports for clickhouse-keeper internal communication from 44444 to 9234. Fixes
#30879
.
#31799
(
alesapin
).
Implement function transform with Decimal arguments.
#31839
(
ζεΈ
).
Fix abort in debug server and
DB::Exception: std::out_of_range: basic_string
error in release server in case of bad hdfs url by adding additional check of hdfs url structure.
#31042
(
Kruglov Pavel
).
Fix possible assert in
hdfs
table function/engine, add test.
#31036
(
Kruglov Pavel
).
Bug Fixes {#bug-fixes}
Fix group by / order by / limit by aliases with positional arguments enabled. Closes
#31173
.
#31741
(
Kseniia Sumarokova
).
Fix usage of
Buffer
table engine with type
Map
. Fixes
#30546
.
#31742
(
Anton Popov
). | {"source_file": "2021.md"} | [
-0.018828708678483963,
-0.060557931661605835,
0.00002333460361114703,
0.032384615391492844,
-0.10572900623083115,
-0.05788009613752365,
0.017180850729346275,
-0.05900125950574875,
0.03108684904873371,
0.07609906047582626,
-0.014487194828689098,
-0.010995586402714252,
0.019289636984467506,
... |
1d832c0c-ddaa-48ca-a067-d5e9c1098758 | Fix usage of
Buffer
table engine with type
Map
. Fixes
#30546
.
#31742
(
Anton Popov
).
Fix reading from
MergeTree
tables with enabled
use_uncompressed_cache
.
#31826
(
Anton Popov
).
Fixed the behavior when mutations that have nothing to do are stuck (with enabled setting
empty_result_for_aggregation_by_empty_set
).
#32358
(
Nikita Mikhaylov
).
Fix skipping columns while writing protobuf. This PR fixes
#31160
, see the comment
#31160
#issuecomment-980595318.
#31988
(
Vitaly Baranov
).
Fix a bug when removing unneeded columns in a subquery: if there is an aggregate function in a query without GROUP BY, do not remove it even if it appears unneeded.
#32289
(
dongyifeng
).
Fixed a case where a quota was reported as exceeded even though its limit was not reached. This PR fixes
#31174
.
#31337
(
sunny
).
Fix SHOW GRANTS when partial revokes are used. This PR fixes
#31138
.
#31249
(
Vitaly Baranov
).
Memory amount was incorrectly estimated when ClickHouse is run in containers with cgroup limits.
#31157
(
Pavel Medvedev
).
Fix
ALTER ... MATERIALIZE COLUMN ...
queries in case when data type of default expression is not equal to the data type of column.
#32348
(
Anton Popov
).
Fixed crash with SIGFPE in aggregate function
avgWeighted
with
Decimal
argument. Fixes
#32053
.
#32303
(
tavplubix
).
Server might fail to start with
Cannot attach 1 tables due to cyclic dependencies
error if
Dictionary
table looks at XML-dictionary with the same name, it's fixed. Fixes
#31315
.
#32288
(
tavplubix
).
Fix parsing error while NaN deserializing for
Nullable(Float)
for
Quoted
escaping rule.
#32190
(
Kruglov Pavel
).
XML dictionaries: identifiers, used in table create query, can be qualified to
default_database
during upgrade to newer version. Closes
#31963
.
#32187
(
Maksim Kita
).
Number of active replicas might be determined incorrectly when inserting with quorum if setting
replicated_can_become_leader
is disabled on some replicas. It's fixed.
#32157
(
tavplubix
).
Dictionaries: fix cases when
{condition}
does not work for custom database queries.
#32117
(
Maksim Kita
).
Fix
CAST
from
Nullable
with
cast_keep_nullable
(
PARAMETER_OUT_OF_BOUND
error before for i.e.
toUInt32OrDefault(toNullable(toUInt32(1)))
).
#32080
(
Azat Khuzhin
).
Fix CREATE TABLE of Join Storage in some obscure cases. Close
#31680
.
#32066
(
SuperDJY
).
Fixed
Directory ... already exists and is not empty
error when detaching part.
#32063
(
tavplubix
).
MaterializedMySQL
(experimental feature): Fix misinterpretation of
DECIMAL
data from MySQL.
#31990
(
Håvard Kvålen
).
FileLog
(experimental feature) engine unnecessarily created a metadata directory when table creation failed. Fix
#31962
.
#31967
(
flynn
). | {"source_file": "2021.md"} | [
0.05140982195734978,
0.013854977674782276,
-0.0046788728795945644,
-0.013070830143988132,
-0.06656232476234436,
-0.0703534260392189,
-0.012578406371176243,
0.04683439061045647,
-0.021649759262800217,
0.007745023816823959,
0.0373929925262928,
-0.05086677893996239,
0.00004917388650937937,
-0... |
e7a56332-c3be-49bd-a4a4-1bcbfc0f8980 | FileLog
(experimental feature) engine unnecessarily created a metadata directory when table creation failed. Fix
#31962
.
#31967
(
flynn
).
Some
GET_PART
entry might hang in replication queue if part is lost on all replicas and there are no other parts in the same partition. It's fixed in cases when partition key contains only columns of integer types or
Date[Time]
. Fixes
#31485
.
#31887
(
tavplubix
).
Fix functions
empty
and
notEmpty
with arguments of
UUID
type. Fixes
#31819
.
#31883
(
Anton Popov
).
Change configuration path from
keeper_server.session_timeout_ms
to
keeper_server.coordination_settings.session_timeout_ms
when constructing a
KeeperTCPHandler
. Same with
operation_timeout
.
#31859
(
JackyWoo
).
Fix invalid cast of Nullable type when nullable primary key is used. (Nullable primary key is a discouraged feature - please do not use). This fixes
#31075
.
#31823
(
Amos Bird
).
Fix crash in recursive UDF in SQL. Closes
#30856
.
#31820
(
Maksim Kita
).
Fix crash when function
dictGet
with type is used for dictionary attribute when type is
Nullable
. Fixes
#30980
.
#31800
(
Maksim Kita
).
Fix crash with empty result of ODBC query (with some ODBC drivers). Closes
#31465
.
#31766
(
Kseniia Sumarokova
).
Fix disabling query profiler (In case of
query_profiler_real_time_period_ns>0
/
query_profiler_cpu_time_period_ns>0
the query profiler could stay enabled even after the query finished).
#31740
(
Azat Khuzhin
).
Fixed rare segfault on concurrent
ATTACH PARTITION
queries.
#31738
(
tavplubix
).
Fix race in JSONEachRowWithProgress output format when data and lines with progress are mixed in output.
#31736
(
Kruglov Pavel
).
Fixed
there are no such cluster here
error on execution of
ON CLUSTER
query if specified cluster name is name of
Replicated
database.
#31723
(
tavplubix
).
Fix exception on some of the applications of
decrypt
function on Nullable columns. This closes
#31662
. This closes
#31426
.
#31707
(
alexey-milovidov
).
Fixed function ngrams when string contains UTF-8 characters.
#31706
(
yandd
).
Settings
input_format_allow_errors_num
and
input_format_allow_errors_ratio
did not work for parsing of domain types, such as
IPv4
, it's fixed. Fixes
#31686
.
#31697
(
tavplubix
).
Fixed null pointer exception in
MATERIALIZE COLUMN
.
#31679
(
Nikolai Kochetov
).
RENAME TABLE
query worked incorrectly on an attempt to rename a DDL dictionary in
Ordinary
database, it's fixed.
#31638
(
tavplubix
).
Implement
sparkbar
aggregate function as it was intended, see:
#26175
#issuecomment-960353867,
comment
.
#31624
(
小路
).
Fix invalid generated JSON when only column names contain invalid UTF-8 sequences.
#31534
(
Kevin Michel
).
Disable
partial_merge_join_left_table_buffer_bytes
until the bug in this optimization is fixed. See
#31009
. Remove redundant option
partial_merge_join_optimizations
.
#31528
(
Vladimir C
). | {"source_file": "2021.md"} | [
0.0030721735674887896,
-0.06087103113532066,
-0.022925591096282005,
0.013268792070448399,
0.01712358370423317,
-0.08059366792440414,
0.006047961767762899,
0.03608738258481026,
0.0016626855358481407,
0.09048397094011307,
0.02027096040546894,
-0.017361711710691452,
-0.004451530519872904,
0.0... |
e6141cf3-7aa4-4833-aa9a-8935694b4fdd | Disable
partial_merge_join_left_table_buffer_bytes
until the bug in this optimization is fixed. See
#31009
. Remove redundant option
partial_merge_join_optimizations
.
#31528
(
Vladimir C
).
Fix progress for short
INSERT SELECT
queries.
#31510
(
Azat Khuzhin
).
Fix wrong behavior with group by and positional arguments. Closes
#31280
#issuecomment-968696186.
#31420
(
Kseniia Sumarokova
).
Resolve
nullptr
in STS credentials provider for S3.
#31409
(
Vladimir Chebotarev
).
Remove
notLike
function from index analysis, because it was wrong.
#31169
(
sundyli
).
Fix bug in Keeper which can lead to inability to start when some coordination logs was lost and we have more fresh snapshot than our latest log.
#31150
(
alesapin
).
Rewrite the right distributed table in a local join. Solves
#25809
.
#31105
(
abel-cheng
).
Fix
Merge
table with aliases and where (it did not work before at all). Closes
#28802
.
#31044
(
Kseniia Sumarokova
).
Fix JSON_VALUE/JSON_QUERY with quoted identifiers. This allows to have spaces in json path. Closes
#30971
.
#31003
(
Kseniia Sumarokova
).
Using
formatRow
function with non-row-oriented formats led to a segfault. Don't allow using this function with such formats (because it doesn't make sense).
#31001
(
Kruglov Pavel
).
Fix bug which broke select queries if they happened after dropping materialized view. Found in
#30691
.
#30997
(
Kseniia Sumarokova
).
Skip
max_partition_size_to_drop
check in case of ATTACH PARTITION ... FROM and MOVE PARTITION ...
#30995
(
Amr Alaa
).
Fix some corner cases with
INTERSECT
and
EXCEPT
operators. Closes
#30803
.
#30965
(
Kseniia Sumarokova
).
Build/Testing/Packaging Improvement {#buildtestingpackaging-improvement}
Fix incorrect filtering result on non-x86 builds. This closes
#31417
. This closes
#31524
.
#31574
(
alexey-milovidov
).
Make ClickHouse build fully reproducible (byte identical on different machines). This closes
#22113
.
#31899
(
alexey-milovidov
). Remove the filesystem path to the build directory from binaries to enable reproducible builds. This is needed for
#22113
.
#31838
(
alexey-milovidov
).
Use our own CMakeLists for
zlib-ng
,
cassandra
,
mariadb-connector-c
and
xz
,
re2
,
sentry
,
gsasl
,
arrow
,
protobuf
. This is needed for
#20151
. Part of
#9226
. A small step towards removal of annoying trash from the build system.
#30599
(
alexey-milovidov
).
Hermetic builds: use a fixed version of libc and make sure that no source or binary files from the host OS are used during the build. This closes
#27133
. This closes
#21435
. This closes
#30462
.
#30011
(
alexey-milovidov
).
Adding function
getFuzzerData()
to easily fuzz particular functions. This closes
#23227
.
#27526
(
Alexey Boykov
).
More correct setting up capabilities inside Docker.
#31802
(
Constantine Peresypkin
). | {"source_file": "2021.md"} | [
-0.0021768345031887293,
0.004279484506696463,
0.004651643335819244,
0.03672128915786743,
-0.023363538086414337,
-0.05579672381281853,
0.034517381340265274,
-0.03798864781856537,
0.01242635864764452,
0.02706308476626873,
0.04080049321055412,
-0.00031373088131658733,
0.023984719067811966,
-0... |
fc59b7d6-c625-4231-8145-d5238995854a | More correct setting up capabilities inside Docker.
#31802
(
Constantine Peresypkin
).
Enable clang
-fstrict-vtable-pointers
,
-fwhole-program-vtables
compile options.
#20151
(
Maksim Kita
).
Avoid downloading toolchain tarballs for cross-compiling for FreeBSD.
#31672
(
alexey-milovidov
).
Initial support for RISC-V. See development/build-cross-riscv for quirks and the build command that was tested.
#31309
(
Vladimir Smirnov
).
Support compiling on ARM machines with the parameter "-DENABLE_TESTS=OFF".
#31007
(
zhanghuajie
).
ClickHouse release v21.11, 2021-11-09 {#clickhouse-release-v2111-2021-11-09}
Backward Incompatible Change {#backward-incompatible-change-1}
Change order of json_path and json arguments in SQL/JSON functions (to be consistent with the standard). Closes
#30449
.
#30474
(
Kseniia Sumarokova
).
Remove
MergeTree
table setting
write_final_mark
. It will always be
true
.
#30455
(
Kseniia Sumarokova
). No actions required, all tables are compatible with the new version.
Function
bayesAB
is removed. Please help to return this function back, refreshed. This closes
#26233
.
#29934
(
alexey-milovidov
).
This is relevant only if you already started using the experimental
clickhouse-keeper
support. Now ClickHouse Keeper snapshots are compressed with the
ZSTD
codec by default instead of custom ClickHouse LZ4 block compression. This behavior can be turned off with
compress_snapshots_with_zstd_format
coordination setting (must be equal on all quorum replicas). Backward incompatibility is quite rare and may happen only when a new node sends a snapshot (in case of recovery) to an old node that is unable to read snapshots in ZSTD format.
#29417
(
alesapin
).
New Feature {#new-feature-1}
New asynchronous INSERT mode allows accumulating inserted data and storing it in a single batch in the background. On the client it can be enabled by the setting
async_insert
for
INSERT
queries with data inlined in query or in separate buffer (e.g. for
INSERT
queries via HTTP protocol). If
wait_for_async_insert
is true (the default), the client will wait until the data is flushed to the table. On the server side it is controlled by the settings
async_insert_threads
,
async_insert_max_data_size
and
async_insert_busy_timeout_ms
. Implements
#18282
.
#27537
(
Anton Popov
).
#20557
(
Ivan
). Notes on performance: with asynchronous inserts you can do up to around 10 000 individual INSERT queries per second, so it is still recommended to insert in batches if you want to achieve performance of up to millions of inserted rows per second.
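A minimal sketch of the client-side usage; `my_table` is a hypothetical table with matching columns:

```sql
-- The server buffers the rows and flushes them in a batch in the background;
-- the client waits for the flush because wait_for_async_insert = 1 (the default).
INSERT INTO my_table
SETTINGS async_insert = 1, wait_for_async_insert = 1
VALUES (1, 'a');
```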
Add interactive mode for
clickhouse-local
. So, you can just run
clickhouse-local
to get a command line ClickHouse interface without connecting to a server and process data from files and external data sources. Also merge the code of
clickhouse-client
and
clickhouse-local
together. Closes
#7203
. Closes
#25516
. Closes
#22401
.
#26231
(
Kseniia Sumarokova
). | {"source_file": "2021.md"} | [
-0.032925281673669815,
0.019597966223955154,
-0.005758232437074184,
-0.04817231744527817,
-0.004790032282471657,
-0.07184154540300369,
-0.10613343864679337,
0.04235631972551346,
-0.06062975525856018,
0.04553370177745819,
-0.01557563804090023,
-0.08104128390550613,
-0.045570217072963715,
-0... |
c1e4ed12-fa46-496d-bce9-90edde245107 | Added support for executable (scriptable) user defined functions. These are UDFs that can be written in any programming language.
#28803
(
Maksim Kita
).
Allow predefined connections to external data sources. This avoids specifying credentials or addresses when using external data sources; they can be referenced by name instead. Closes
#28367
.
#28577
(
Kseniia Sumarokova
).
Added
INFORMATION_SCHEMA
database with
SCHEMATA
,
TABLES
,
VIEWS
and
COLUMNS
views to the corresponding tables in
system
database. Closes
#9770
.
#28691
(
tavplubix
).
Support
EXISTS (subquery)
. Closes
#6852
.
#29731
(
Kseniia Sumarokova
).
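A minimal sketch using only built-in table functions; this is an uncorrelated `EXISTS`, so the subquery is evaluated once:

```sql
-- The subquery matches at least one row, so all three outer rows are returned
SELECT number
FROM numbers(3)
WHERE EXISTS (SELECT * FROM numbers(10) WHERE number > 8);
```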
Session logging for audit. Logging all successful and failed login and logout events to a new
system.session_log
table.
#22415
(
Vasily Nemkov
) (
Vitaly Baranov
).
Support multidimensional cosine distance and Euclidean distance functions; L1, L2, Lp, Linf distances and norms. Scalar product on tuples and various arithmetic operators on tuples. This fully closes
#4509
and even more.
#27933
(
Alexey Boykov
).
Add support for compression and decompression for
INTO OUTFILE
and
FROM INFILE
(with autodetect or with additional optional parameter).
#27135
(
Filatenkov Artur
).
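For example, the compression codec can be inferred from the file extension (the file name is illustrative):

```sql
-- Writes gzip-compressed CSV; the codec is autodetected from the .gz suffix.
SELECT number
FROM system.numbers
LIMIT 10
INTO OUTFILE 'numbers.csv.gz'
FORMAT CSV;
```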
Add CORS (Cross Origin Resource Sharing) support with HTTP
OPTIONS
request. This means Grafana will now work with serverless requests without kludges. Closes
#18693
.
#29155
(
Filatenkov Artur
).
Queries with JOIN ON now support disjunctions (OR).
#21320
(
Ilya Golshtein
).
Added function
tokens
. It allows splitting a string into tokens using non-alphanumeric ASCII characters as separators.
#29981
(
Maksim Kita
). Added function
ngrams
to extract ngrams from text. Closes
#29699
.
#29738
(
Maksim Kita
).
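A quick illustration of both functions:

```sql
SELECT tokens('ClickHouse 21.11 release');
-- ['ClickHouse', '21', '11', 'release']

SELECT ngrams('ABCD', 3);
-- ['ABC', 'BCD']
```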
Add functions for Unicode normalization:
normalizeUTF8NFC
,
normalizeUTF8NFD
,
normalizeUTF8NFKC
,
normalizeUTF8NFKD
functions.
#28633
(
darkkeks
).
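As an example of why normalization matters, the same accented character can be stored composed (NFC, one code point) or decomposed (NFD, base letter plus combining accent):

```sql
-- length() counts bytes: 'Γ©' is 2 bytes in NFC and 3 bytes in NFD.
SELECT
    length(normalizeUTF8NFC('Γ©')) AS nfc_bytes,  -- 2
    length(normalizeUTF8NFD('Γ©')) AS nfd_bytes;  -- 3
```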
Streaming consumption of application log files in ClickHouse with
FileLog
table engine. It's like
Kafka
or
RabbitMQ
engine but for append-only and rotated logs in local filesystem. Closes
#6953
.
#25969
(
flynn
) (
Kseniia Sumarokova
).
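A sketch of the engine's usage, assuming the signature takes a directory path plus an input format (both values here are illustrative):

```sql
-- Stream appended lines from rotated logs in a local directory.
CREATE TABLE app_logs
(
    line String
)
ENGINE = FileLog('/var/log/myapp/', 'LineAsString');
```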
Add
CapnProto
output format, refactor
CapnProto
input format.
#29291
(
Kruglov Pavel
).
Allow to write number in query as binary literal. Example
SELECT 0b001;
.
#29304
(
Maksim Kita
).
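For example:

```sql
SELECT 0b001 AS one, 0b101 AS five;  -- 1, 5
```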
Added
hashed_array
dictionary type. It saves memory when using dictionaries with multiple attributes. Closes
#30236
.
#30242
(
Maksim Kita
).
Added
JSONExtractKeys
function.
#30056
(
Vitaly
).
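For example, extracting the top-level keys of a JSON object:

```sql
SELECT JSONExtractKeys('{"price": 1, "town": "London"}');
-- ['price', 'town']
```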
Add a function
getOSKernelVersion
- it returns a string with OS kernel version.
#29755
(
Memo
).
Added
MD4
and
SHA384
functions. MD4 is an obsolete and insecure hash function; use it only in the rare case that MD4 is already used in some legacy system and you need to get exactly the same result.
#29602
(
Nikita Tikhomirov
).
HSTS can be enabled for ClickHouse HTTP server by setting
hsts_max_age
in configuration file with a positive number.
#29516
(
εζΆ
).
Huawei OBS Storage support. Closes
#24294
.
#29511
(
kevin wan
).
New function
mapContainsKeyLike
that checks whether the map contains a key matching a simple LIKE-style pattern.
#29471
(
εζΆ
). New function
mapExtractKeyLike
that returns a map keeping only the elements whose keys match the specified LIKE-style pattern.
#30793
(
εζΆ
).
Implemented
ALTER TABLE x MODIFY COMMENT
.
#29264
(
Vasily Nemkov
).
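For example (the table name is illustrative):

```sql
ALTER TABLE uk.uk_price_paid MODIFY COMMENT 'UK property price paid data';

-- The comment is visible in system.tables:
SELECT comment
FROM system.tables
WHERE database = 'uk' AND name = 'uk_price_paid';
```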
Adds H3 inspection functions that are missing from ClickHouse but are available via the H3 API: https://h3geo.org/docs/api/inspection.
#29209
(
Bharat Nallan
).
Allow non-replicated ALTER TABLE FETCH and ATTACH in Replicated databases.
#29202
(
Kevin Michel
).
Added a setting
output_format_csv_null_representation
: This is the same as
output_format_tsv_null_representation
but is for CSV output.
#29123
(
PHO
).
Added function
zookeeperSessionUptime()
which returns uptime of current ZooKeeper session in seconds.
#28983
(
tavplubix
).
Implements the
h3ToGeoBoundary
function.
#28952
(
Ivan Veselov
).
Add aggregate function
exponentialMovingAverage
that can be used as window function. This closes
#27511
.
#28914
(
alexey-milovidov
).
Allow to include subcolumns of table columns into
DESCRIBE
query result (can be enabled by setting
describe_include_subcolumns
).
#28905
(
Anton Popov
).
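For instance, with the setting enabled, DESCRIBE also lists subcolumns such as the null mask of a Nullable column or the keys/values of a Map (the table name is illustrative):

```sql
DESCRIBE TABLE t
SETTINGS describe_include_subcolumns = 1;
```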
Executable
,
ExecutablePool
added option
send_chunk_header
. If this option is true, the chunk's row count followed by a line break is sent to the client before each chunk.
#28833
(
Maksim Kita
).
tokenbf_v1
and
ngrambf_v1
support Map with keys of String or FixedString type. This enhances data skipping in queries that filter by map key.

```sql
CREATE TABLE map_tokenbf
(
    row_id UInt32,
    map Map(String, String),
    INDEX map_tokenbf map TYPE ngrambf_v1(4, 256, 2, 0) GRANULARITY 1
)
ENGINE = MergeTree()
ORDER BY row_id;
```

With the table above, the query
select * from map_tokenbf where map['K']='V'
will skip granules that do not contain the key
K
. Of course, how many rows are skipped depends on the
granularity
and
index_granularity
you set.
#28511
(
εζΆ
).
Send profile events from server to client. New packet type
ProfileEvents
was introduced. Closes
#26177
.
#28364
(
Dmitry Novik
).
Bit shift operations for
FixedString
and
String
data types. This closes
#27763
.
#28325
(
ε°θ·―
).
Support adding / deleting tables to replication from PostgreSQL dynamically in database engine MaterializedPostgreSQL. Support alter for database settings. Closes
#27573
.
#28301
(
Kseniia Sumarokova
).
Added function accurateCastOrDefault(x, T). Closes
#21330
. Authors @taiyang-li.
#23028
(
Maksim Kita
).
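A sketch of the behaviour: when the cast cannot be performed exactly, the type's default value is returned instead of throwing:

```sql
SELECT
    accurateCastOrDefault(-1, 'UInt8') AS out_of_range,  -- 0 (UInt8 default)
    accurateCastOrDefault(5, 'UInt8')  AS in_range;      -- 5
```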
Add Function
toUUIDOrDefault
,
toUInt8/16/32/64/256OrDefault
,
toInt8/16/32/64/128/256OrDefault
, which let the user define a default (non-NULL) value to return when string parsing fails.
#21330
(
taiyang-li
).
Performance Improvement {#performance-improvement-1}
Background merges can be preempted by each other and they are scheduled with appropriate priorities. Now long-running merges won't prevent short merges from proceeding. This is needed for better scheduling and control of merge execution. It reduces the chances of getting a "too many parts" error.
#22381
.
#25165
(
Nikita Mikhaylov
). Added an ability to execute more merges and mutations than the number of threads in background pool. Merges and mutations will be executed step by step according to their sizes (lower is more prioritized). The ratio of the number of tasks to threads to execute is controlled by a setting
background_merges_mutations_concurrency_ratio
, 2 by default.
#29140
(
Nikita Mikhaylov
).
Allow to use asynchronous reads for remote filesystems. Lower the number of seeks while reading from remote filesystems. It improves performance tremendously and makes the experimental
web
and
s3
disks work faster than EBS under certain conditions.
#29205
(
Kseniia Sumarokova
). In the meantime, the
web
disk type (static dataset hosted on a web server) is graduated from being experimental to be production ready.
Queries with
INTO OUTFILE
in
clickhouse-client
will use multiple threads. Fix the issue with flickering progress-bar when using
INTO OUTFILE
. This closes
#30873
. This closes
#30872
.
#30886
(
alexey-milovidov
).
Reduce amount of redundant compressed data read from disk for some types
SELECT
queries (only for
MergeTree
engines family).
#30111
(
alesapin
).
Remove some redundant
seek
calls while reading compressed blocks in MergeTree table engines family.
#29766
(
alesapin
).
Make
url
table function to process multiple URLs in parallel. This closes
#29670
and closes
#29671
.
#29673
(
alexey-milovidov
).
Improve performance of aggregation in order of primary key (with enabled setting
optimize_aggregation_in_order
).
#30266
(
Anton Popov
).
ClickHouse now uses a DNS cache when communicating with external S3.
#29999
(
alesapin
).
Add support for pushdown of
IS NULL
/
IS NOT NULL
to external databases (i.e. MySQL).
#29463
(
Azat Khuzhin
). Transform
isNull
/
isNotNull
to
IS NULL
/
IS NOT NULL
(for external dbs, i.e. MySQL).
#29446
(
Azat Khuzhin
).
SELECT queries from Dictionary tables will use multiple threads.
#30500
(
Maksim Kita
).
Improve performance for filtering (WHERE operation) of
Decimal
columns.
#30431
(
Jun Jin
).
Remove branchy code in filter operation with a better implementation with popcnt/ctz which have better performance.
#29881
(
Jun Jin
).
Improve the filter bytemask generator function (used for the WHERE operator) with SSE/AVX2/AVX512 instructions. Note that by default ClickHouse only uses SSE, so this is only relevant for custom builds.
#30014
(
jasperzhu
).
#30670
(
jasperzhu
).
Improve the performance of SUM aggregate function of Nullable floating point numbers.
#28906
(
RaΓΊl MarΓn
).
Speed up the part loading process when multiple disks are in use. The idea is similar to https://github.com/ClickHouse/ClickHouse/pull/16423. A production environment shows an improvement from 24 min to 16 min.
#28363
(
Amos Bird
).
Reduce default settings for S3 multipart upload part size to lower memory usage.
#28679
(
ianton-ru
).
Speed up
bitmapAnd
function.
#28332
(
dddounaiking
).
Removed sub-optimal mutation notifications in
StorageMergeTree
when merges are still going.
#27552
(
Vladimir Chebotarev
).
Attempt to improve performance of string comparison.
#28767
(
alexey-milovidov
).
Primary key index and partition filter can work in tuple.
#29281
(
εζΆ
).
If a query has multiple quantile aggregate functions with the same arguments but different level parameters, they will be fused together and executed in one pass if the setting
optimize_syntax_fuse_functions
is enabled.
#26657
(
hexiaoting
).
Now min-max aggregation over the first expression of primary key is optimized by projection. This is for
#329
.
#29918
(
Amos Bird
).
Experimental Feature {#experimental-feature-1}
Add ability to change nodes configuration (in
.xml
file) for ClickHouse Keeper.
#30372
(
alesapin
).
Add
sparkbar
aggregate function. This closes
#26175
.
#27481
(
ε°θ·―
). Note: there is one flaw in this function, the behaviour will be changed in future releases.
Improvement {#improvement-1}
Allow user to change log levels without restart.
#29586
(
Nikolay Degterinsky
).
Multiple improvements for SQL UDF. Queries for manipulation of SQL User Defined Functions now support ON CLUSTER clause. Example
CREATE FUNCTION test_function ON CLUSTER 'cluster' AS x -> x + 1;
. Closes
#30666
.
#30734
(
Maksim Kita
). Support
CREATE OR REPLACE
,
CREATE IF NOT EXISTS
syntaxes.
#30454
(
Maksim Kita
). Added DROP IF EXISTS support. Example
DROP FUNCTION IF EXISTS test_function
.
#30437
(
Maksim Kita
). Support lambdas. Example
CREATE FUNCTION lambda_function AS x -> arrayMap(element -> element * 2, x);
.
#30435
(
Maksim Kita
). Support SQL user defined functions for
clickhouse-local
.
#30179
(
Maksim Kita
).
Enable per-query memory profiler (set to
memory_profiler_step
= 4MiB) globally.
#29455
(
Azat Khuzhin
).
Added columns
data_compressed_bytes
,
data_uncompressed_bytes
,
marks_bytes
into
system.data_skipping_indices
. Added columns
secondary_indices_compressed_bytes
,
secondary_indices_uncompressed_bytes
,
secondary_indices_marks_bytes
into
system.parts
. Closes
#29697
.
#29896
(
Maksim Kita
).
Add
table
alias to system.tables and
database
alias to system.databases
#29677
.
#29882
(
kevin wan
).
Correctly resolve interdependencies between tables on server startup. Closes
#8004
, closes
#15170
.
#28373
(
tavplubix
).
Avoid error "Division by zero" when denominator is Nullable in functions
divide
,
intDiv
and
modulo
. Closes
#22621
.
#28352
(
Kruglov Pavel
).
Allow to parse values of
Date
data type in text formats as
YYYYMMDD
in addition to
YYYY-MM-DD
. This closes
#30870
.
#30871
(
alexey-milovidov
).
Web UI: render bars in table cells.
#29792
(
alexey-milovidov
).
User can now create dictionaries with comments:
CREATE DICTIONARY ... COMMENT 'value'
...
#29899
(
Vasily Nemkov
). Users now can set comments to database in
CREATE DATABASE
statement ...
#29429
(
Vasily Nemkov
).
Introduce
compiled_expression_cache_elements_size
setting. If you ever want to use this setting, you will already know what it does.
#30667
(
Maksim Kita
).
clickhouse-format now supports option
--query
. In previous versions you had to pass the query via stdin.
#29325
(
εζΆ
).
Support
ALTER TABLE
for tables in
Memory
databases. Memory databases are used in
clickhouse-local
.
#30866
(
tavplubix
).
Arrays of all serializable types are now supported by
arrayStringConcat
.
#30840
(
Nickita Taranov
).
ClickHouse now accounts for docker/cgroup limits when determining the amount of system memory. See
#25662
.
#30574
(
Pavel Medvedev
).
Fetched table structure for PostgreSQL database is more reliable now.
#30477
(
Kseniia Sumarokova
).
Full support of positional arguments in GROUP BY and ORDER BY.
#30433
(
Kseniia Sumarokova
).
Allow extracting non-string element as string using JSONExtractString. This is for
pull/25452#issuecomment-927123287
.
#30426
(
Amos Bird
).
Added an ability to use FINAL clause in SELECT queries from
GraphiteMergeTree
.
#30360
(
Nikita Mikhaylov
).
Minor improvements in replica cloning and enqueuing fetch for broken parts, that should avoid extremely rare hanging of
GET_PART
entries in replication queue.
#30346
(
tavplubix
).
Allow symlinks to files in
user_files
directory for file table function.
#30309
(
Kseniia Sumarokova
).
Fixed comparison of
Date32
with
Date
,
DateTime
,
DateTime64
and
String
.
#30219
(
liang.huang
).
Allow to remove
SAMPLE BY
expression from
MergeTree
tables (
ALTER TABLE <table> REMOVE SAMPLE BY
).
#30180
(
Anton Popov
).
Now
Keeper
(as part of
clickhouse-server
) will start asynchronously if it can connect to some other node.
#30170
(
alesapin
).
Now
clickhouse-client
supports native multi-line editing.
#30143
(
Amos Bird
).
polygon
dictionaries (reverse geocoding): added support for reading the dictionary content with SELECT query method if setting
store_polygon_key_column
= true. Closes
#30090
.
#30142
(
Maksim Kita
).
Add ClickHouse logo to Play UI.
#29674
(
alexey-milovidov
).
Better exception message while reading column from Arrow-supported formats like
Arrow
,
ArrowStream
,
Parquet
and
ORC
. This closes
#29926
.
#29927
(
alexey-milovidov
).
Fix data-race between flush and startup in
Buffer
tables. This can appear in tests.
#29930
(
Azat Khuzhin
).
Fix
lock-order-inversion
between
DROP TABLE
for
DatabaseMemory
and
LiveView
. Live View is an experimental feature. Memory database is used in clickhouse-local.
#29929
(
Azat Khuzhin
).
Fix lock-order-inversion between periodic dictionary reload and config reload.
#29928
(
Azat Khuzhin
).
Update zoneinfo files to 2021c.
#29925
(
alexey-milovidov
).
Add ability to configure retries and delays between them for
clickhouse-copier
.
#29921
(
Azat Khuzhin
).
Add
shutdown_wait_unfinished_queries
server setting to allowing waiting for running queries up to
shutdown_wait_unfinished
time. This is for
#24451
.
#29914
(
Amos Bird
).
Add ability to trace peak memory usage (with new trace_type in
system.trace_log
-
MemoryPeak
).
#29858
(
Azat Khuzhin
).
PostgreSQL foreign tables: Added partitioned table prefix 'p' for the query for fetching replica identity index.
#29828
(
Shoh Jahon
).
Apply
max_untracked_memory
/
memory_profiler_step
/
memory_profiler_sample_probability
during mutate/merge to profile memory usage during merges.
#29681
(
Azat Khuzhin
).
Query obfuscator:
clickhouse-format --obfuscate
now works with more types of queries.
#29672
(
alexey-milovidov
).
Fixed the issue:
clickhouse-format --obfuscate
cannot process queries with embedded dictionaries (functions
regionTo...
).
#29667
(
alexey-milovidov
).
Fix incorrect Nullable processing of JSON functions. This fixes
#29615
. Mark as improvement because https://github.com/ClickHouse/ClickHouse/pull/28012 is not released.
#29659
(
Amos Bird
).
Increase
listen_backlog
by default (to match default in newer linux kernel).
#29643
(
Azat Khuzhin
).
Reload dictionaries, models, user defined executable functions if servers config
dictionaries_config
,
models_config
,
user_defined_executable_functions_config
changes. Closes
#28142
.
#29529
(
Maksim Kita
).
Get rid of pointless restriction on projection name. Now projection name can start with
tmp_
.
#29520
(
Amos Bird
).
Fixed
There is no query or query context has expired
error in mutations with nested subqueries. Do not allow subqueries in mutation if table is replicated and
allow_nondeterministic_mutations
setting is disabled.
#29495
(
tavplubix
).
Apply config changes to
max_concurrent_queries
during runtime (no need to restart).
#29414
(
RaΓΊl MarΓn
).
Added setting
use_skip_indexes
.
#29405
(
Maksim Kita
).
Add support for
FREEZE
ing in-memory parts (for backups).
#29376
(
Mo Xuan
).
Pass through initial query_id for
clickhouse-benchmark
(previously if you run remote query via
clickhouse-benchmark
, queries on shards will not be linked to the initial query via
initial_query_id
).
#29364
(
Azat Khuzhin
).
Skip indexes
tokenbf_v1
and
ngrambf_v1
: added support for
Array
data type with key of
String
or
FixedString
type.
#29280
(
Maksim Kita
). Skip indexes
tokenbf_v1
and
ngrambf_v1
added support for
Map
data type with key of
String
or
FixedString
type. Author @lingtaolf.
#29220
(
Maksim Kita
).
Function
has
: added support for
Map
data type.
#29267
(
Maksim Kita
).
Add
compress_logs
settings for clickhouse-keeper which allow to compress clickhouse-keeper logs (for replicated state machine) in
ZSTD
. Implements:
#26977
.
#29223
(
alesapin
).
Add a setting
external_table_strict_query
- it will force passing the whole WHERE expression in queries to foreign databases even if it is incompatible.
#29206
(
Azat Khuzhin
).
Disable projections when
ARRAY JOIN
is used. In previous versions projection analysis may break aliases in array join.
#29139
(
Amos Bird
).
Support more types in
MsgPack
input/output format.
#29077
(
Kruglov Pavel
).
Allow to input and output
LowCardinality
columns in
ORC
input/output format.
#29062
(
Kruglov Pavel
).
Select from
system.distributed_ddl_queue
might show incorrect values; it's fixed.
#29061
(
tavplubix
).
Correct behaviour with unknown methods for HTTP connection. Solves
#29050
.
#29057
(
Filatenkov Artur
).
clickhouse-keeper
: Fix bug in
clickhouse-keeper-converter
which can lead to some data loss while restoring from ZooKeeper logs (not snapshot).
#29030
(
ε°θ·―
). Fix bug in
clickhouse-keeper-converter
which can lead to incorrect ZooKeeper log deserialization.
#29071
(
ε°θ·―
).
Apply settings from
CREATE ... AS SELECT
queries (fixes:
#28810
).
#28962
(
Azat Khuzhin
).
Respect default database setting for ALTER TABLE ... ON CLUSTER ... REPLACE/MOVE PARTITION FROM/TO ...
#28955
(
anneji-dev
).
gRPC protocol: Allow changing server-side compression from the client.
#28953
(
Vitaly Baranov
).
Skip "no data" exception when reading thermal sensors for asynchronous metrics. This closes
#28852
.
#28882
(
alexey-milovidov
).
Fixed logical race condition that might cause
Dictionary not found
error for existing dictionary in rare cases.
#28853
(
tavplubix
).
Relax nested function for If-combinator check (but forbid nested identical combinators).
#28828
(
Azat Khuzhin
).
Fix possible uncaught exception during server termination.
#28761
(
Azat Khuzhin
).
Forbid cleaning of tmp directories that can be used by an active mutation/merge if mutation/merge is extraordinarily long.
#28760
(
Azat Khuzhin
).
Allow optimization
optimize_arithmetic_operations_in_aggregate_functions = 1
when alias is used.
#28746
(
Amos Bird
).
Implement
detach_not_byte_identical_parts
setting for
ReplicatedMergeTree
, that will detach instead of removing non-byte-identical parts (after merge/mutate).
#28708
(
Azat Khuzhin
).
Implement
max_suspicious_broken_parts_bytes
setting for
MergeTree
(to limit total size of all broken parts, default is
1GiB
).
#28707
(
Azat Khuzhin
).
Enable expanding macros in
RabbitMQ
table settings.
#28683
(
Vitaly Baranov
).
Restore the possibility to read data of a table using the
Log
engine in multiple threads.
#28125
(
Vitaly Baranov
).
Fix misbehavior of NULL column handling in JSON functions. This fixes
#27930
.
#28012
(
Amos Bird
).
Allow to set the size of Mark/Uncompressed cache for skip indices separately from columns.
#27961
(
Amos Bird
).
Allow to mix JOIN with
USING
with other JOIN types.
#23881
(
darkkeks
).
Update aws-sdk submodule for throttling in Yandex Cloud S3.
#30646
(
ianton-ru
).
Fix releasing query ID and session ID at the end of query processing while handling a gRPC call.
#29954
(
Vitaly Baranov
).
Fix shutdown of
AccessControlManager
to fix flaky test.
#29951
(
Vitaly Baranov
).
Fix failed assertion in reading from
HDFS
. Update libhdfs3 library to be able to run in tests in debug. Closes
#29251
. Closes
#27814
.
#29276
(
Kseniia Sumarokova
).
Build/Testing/Packaging Improvement {#buildtestingpackaging-improvement-1}
Add support for FreeBSD builds for Aarch64 machines.
#29952
(
MikaelUrankar
).
Recursive submodules are no longer needed for ClickHouse.
#30315
(
alexey-milovidov
).
ClickHouse can be statically built with Musl. This is added as experiment, it does not support building
odbc-bridge
,
library-bridge
, integration with CatBoost and some libraries.
#30248
(
alexey-milovidov
).
Enable
Protobuf
,
Arrow
,
ORC
,
Parquet
for
AArch64
and
Darwin
(macOS) builds. This closes
#29248
. This closes
#28018
.
#30015
(
alexey-milovidov
).
Add cross-build for PowerPC (powerpc64le). This closes
#9589
. Enable support for interaction with MySQL for AArch64 and PowerPC. This closes
#26301
.
#30010
(
alexey-milovidov
).
Leave only required files in cross-compile toolchains. Include them as submodules (earlier they were downloaded as tarballs).
#29974
(
alexey-milovidov
).
Implemented structure-aware fuzzing approach in ClickHouse for select statement parser.
#30012
(
Paul
).
Turning on experimental constexpr expressions evaluator for clang to speed up template code compilation.
#29668
(
myrrc
).
Add ability to compile using a newer version of glibc without using new symbols.
#29594
(
Azat Khuzhin
).
Reduce Debug build binary size by clang optimization option.
#28736
(
flynn
).
Now all images for CI will be placed in the separate dockerhub repo.
#28656
(
alesapin
).
Improve support for build with clang-13.
#28046
(
Sergei Semin
).
Add ability to print raw profile events to
clickhouse-client
(This can be useful for debugging and for testing).
#30064
(
Azat Khuzhin
).
Add time dependency for clickhouse-server unit (systemd and sysvinit init).
#28891
(
Azat Khuzhin
).
Reload stacktrace cache when symbol is reloaded.
#28137
(
Amos Bird
).
Bug Fix {#bug-fix}
Functions for case-insensitive search in UTF-8 strings like
positionCaseInsensitiveUTF8
and
countSubstringsCaseInsensitiveUTF8
might find substrings that actually do not match in very rare cases; it's fixed.
#30663
(
tavplubix
).
Fix reading from empty file on encrypted disk.
#30494
(
Vitaly Baranov
).
Fix transformation of disjunctions chain to
IN
(controlled by settings
optimize_min_equality_disjunction_chain_length
) in distributed queries with settings
legacy_column_name_of_tuple_literal = 0
.
#28658
(
Anton Popov
).
Allow using a materialized column as the sharding key in a distributed table even if
insert_allow_materialized_columns=0
:.
#28637
(
Vitaly Baranov
).
Fix
ORDER BY ... WITH FILL
with set
TO
and
FROM
and no rows in result set.
#30888
(
Anton Popov
).
Fix set index not used in AND/OR expressions when there are more than two operands. This fixes
#30416
.
#30887
(
Amos Bird
).
Fix crash when projection with hashing function is materialized. This fixes
#30861
. The issue is similar to https://github.com/ClickHouse/ClickHouse/pull/28560, which stems from a lack of proper understanding of the invariant of the header's emptiness.
#30877
(
Amos Bird
).
Fixed ambiguity when extracting auxiliary ZooKeeper name from ZooKeeper path in
ReplicatedMergeTree
. Previously server might fail to start with
Unknown auxiliary ZooKeeper name
if ZooKeeper path contains a colon. Fixes
#29052
. Also it was allowed to specify ZooKeeper path that does not start with slash, but now it's deprecated and creation of new tables with such path is not allowed. Slashes and colons in auxiliary ZooKeeper names are not allowed too.
#30822
(
tavplubix
).
Clean temporary directory when localBackup failed by some reason.
#30797
(
ianton-ru
).
Fixed a race condition between
REPLACE/MOVE PARTITION
and background merge in non-replicated
MergeTree
that might cause a part of moved/replaced data to remain in partition. Fixes
#29327
.
#30717
(
tavplubix
).
Fix PREWHERE with WHERE in case of always true PREWHERE.
#30668
(
Azat Khuzhin
).
Limit push down optimization could cause a error
Cannot find column
. Fixes
#30438
.
#30562
(
Nikolai Kochetov
).
Add missing parenthesis for
isNotNull
/
isNull
rewrites to
IS [NOT] NULL
(fixes queries that has something like
isNotNull(1)+isNotNull(2)
).
#30520
(
Azat Khuzhin
).
Fix deadlock on ALTER with scalar subquery to the same table, close
#30461
.
#30492
(
Vladimir C
).
Fixed segfault which might happen if session expired during execution of REPLACE PARTITION.
#30432
(
tavplubix
).
Queries with condition like
IN (subquery)
could return incorrect result in case if aggregate projection applied. Fixed creation of sets for projections.
#30310
(
Amos Bird
).
Fix column alias resolution of JOIN queries when projection is enabled. This fixes
#30146
.
#30293
(
Amos Bird
).
Fix some deficiency in
replaceRegexpAll
function.
#30292
(
Memo
).
Fix ComplexKeyHashedDictionary, ComplexKeySparseHashedDictionary parsing
preallocate
option from layout config.
#30246
(
Maksim Kita
).
Fix
[I]LIKE
function. Closes
#28661
.
#30244
(
Nikolay Degterinsky
).
Fix crash with shortcircuit and lowcardinality in multiIf.
#30243
(
RaΓΊl MarΓn
).
FlatDictionary, HashedDictionary fix bytes_allocated calculation for nullable attributes.
#30238
(
Maksim Kita
).
Allow identifiers starting with numbers in multiple joins.
#30230
(
Vladimir C
).
Fix reading from
MergeTree
with
max_read_buffer_size = 0
(when the user wants to shoot himself in the foot) (can lead to exceptions
Can't adjust last granule
,
LOGICAL_ERROR
, or even data loss).
#30192
(
Azat Khuzhin
).
Fix
pread_fake_async
/
pread_threadpool
with
min_bytes_to_use_direct_io
.
#30191
(
Azat Khuzhin
).
Fix INSERT SELECT incorrectly fills MATERIALIZED column based of Nullable column.
#30189
(
Azat Khuzhin
).
Support nullable arguments in function
initializeAggregation
.
#30177
(
Anton Popov
).
Fix error
Port is already connected
for queries with
GLOBAL IN
and
WITH TOTALS
. Only for 21.9 and 21.10.
#30086
(
Nikolai Kochetov
).
Fix race between MOVE PARTITION and merges/mutations for MergeTree.
#30074
(
Azat Khuzhin
).
Dropped
Memory
database might reappear after server restart, it's fixed (
#29795
). Also added
force_remove_data_recursively_on_drop
setting as a workaround for
Directory not empty
error when dropping
Ordinary
database (because it's not possible to remove data leftovers manually in cloud environment).
#30054
(
tavplubix
).
Fix crash of sample by
tuple()
, closes
#30004
.
#30016
(
flynn
).
try to close issue:
#29965
.
#29976
(
hexiaoting
).
Fix possible data-race between
FileChecker
and
StorageLog
/
StorageStripeLog
.
#29959
(
Azat Khuzhin
).
Fix data-race between
LogSink::writeMarks()
and
LogSource
in
StorageLog
.
#29946
(
Azat Khuzhin
).
Fix potential resource leak of the concurrent query limit of merge tree tables introduced in https://github.com/ClickHouse/ClickHouse/pull/19544.
#29879
(
Amos Bird
).
Fix system tables recreation check (fails to detect changes in enum values).
#29857
(
Azat Khuzhin
).
MaterializedMySQL: Fix an issue where if the connection to MySQL was lost, only parts of a transaction could be processed.
#29837
(
HΓ₯vard KvΓ₯len
).
Avoid
Timeout exceeded: elapsed 18446744073.709553 seconds
error that might happen in extremely rare cases, presumably due to some bug in kernel. Fixes
#29154
.
#29811
(
tavplubix
).
Fix bad cast in
ATTACH TABLE ... FROM 'path'
query when non-string literal is used instead of path. It may lead to reading of uninitialized memory.
#29790
(
alexey-milovidov
).
Fix concurrent access to
LowCardinality
during
GROUP BY
(in combination with
Buffer
tables it may lead to troubles).
#29782
(
Azat Khuzhin
).
Fix incorrect
GROUP BY
(multiple rows with the same keys in result) in case of distributed query when shards had mixed versions
<= 21.3
and
>= 21.4
,
GROUP BY
key had several columns all with fixed size, and two-level aggregation was activated (see
group_by_two_level_threshold
and
group_by_two_level_threshold_bytes
). Fixes
#29580
.
#29735
(
Nikolai Kochetov
).
Fixed incorrect behaviour of setting
materialized_postgresql_tables_list
at server restart. Found in
#28529
.
#29686
(
Kseniia Sumarokova
).
Condition in filter predicate could be lost after push-down optimisation.
#29625
(
Nikolai Kochetov
).
Fix JIT expression compilation with aliases and short-circuit expression evaluation. Closes
#29403
.
#29574
(
Maksim Kita
).
Fix rare segfault in
ALTER MODIFY
query when using incorrect table identifier in
DEFAULT
expression like
x.y.z...
Fixes
#29184
.
#29573
(
alesapin
).
Fix nullptr deference for
GROUP BY WITH TOTALS HAVING
(when the column from
HAVING
wasn't selected).
#29553
(
Azat Khuzhin
).
Avoid deadlocks when reading and writting on Join table engine tables at the same time.
#29544
(
RaΓΊl MarΓn
).
Fix bug in check
pathStartsWith
because there was bug with the usage of
std::mismatch
:
The behavior is undefined if the second range is shorter than the first range.
.
#29531
(
Kseniia Sumarokova
).
In ODBC bridge add retries for error Invalid cursor state. It is a retriable error. Closes
#29473
.
#29518
(
Kseniia Sumarokova
).
Fixed incorrect table name parsing on loading of
Lazy
database. Fixes
#29456
.
#29476
(
tavplubix
).
Fix possible
Block structure mismatch
for subqueries with pushed-down
HAVING
predicate. Fixes
#29010
.
#29475
(
Nikolai Kochetov
).
Fix Logical error
Cannot capture columns
in functions greatest/least. Closes
#29334
.
#29454
(
Kruglov Pavel
). | {"source_file": "2021.md"} | [
RocksDB table engine: fix race condition during multiple DB opening (and get back some tests that trigger the problem on CI). #29393 (Azat Khuzhin).
Fix replicated access storage not shutting down cleanly when misconfigured. #29388 (Kevin Michel).
Remove window function `nth_value` as it is not memory-safe. This closes #29347. #29348 (alexey-milovidov).
Fix vertical merges of projection parts. This fixes #29253. This PR also fixes several projection merge/mutation issues introduced in https://github.com/ClickHouse/ClickHouse/pull/25165. #29337 (Amos Bird).
Fix hanging DDL queries on Replicated database while adding a new replica. #29328 (Kevin Michel).
Fix connection timeouts (`send_timeout` / `receive_timeout`). #29282 (Azat Khuzhin).
Fix possible `Table columns structure in ZooKeeper is different from local table structure` exception while recreating or creating new replicas of `ReplicatedMergeTree`, when one of the table columns has default expressions with case-insensitive functions. #29266 (Anton Popov).
Send a normal `Database doesn't exist` error (`UNKNOWN_DATABASE`) to the client (via TCP) instead of `Attempt to read after eof` (`ATTEMPT_TO_READ_AFTER_EOF`). #29229 (Azat Khuzhin).
Fix segfault while inserting into a column with type LowCardinality(Nullable) in Avro input format. #29132 (Kruglov Pavel).
Do not allow reusing previous credentials in case of inter-server secret (before this fix, INSERT via Buffer/Kafka to a Distributed table with an interserver secret configured for that cluster could re-use a previously set user for that connection). #29060 (Azat Khuzhin).
Handle `any_join_distinct_right_table_keys` when joining with a dictionary. Closes #29007. #29014 (Vladimir C).
Fix "Not found column ... in block" error when joining on an alias column. Closes #26980. #29008 (Vladimir C).
Fix the number of threads used in a `GLOBAL IN` subquery (it was executed in a single thread since the #19414 bugfix). #28997 (Nikolai Kochetov).
Fix bad optimizations of ORDER BY if it contains WITH FILL. This closes #28908. This closes #26049. #28910 (alexey-milovidov).
Fix higher-order array functions (`SIGSEGV` for `arrayCompact` / `ILLEGAL_COLUMN` for `arrayDifference` / `arrayCumSumNonNegative`) with consts. #28904 (Azat Khuzhin).
Fix waiting for mutation with `mutations_sync=2`. #28889 (Azat Khuzhin).
Fix queries to external databases (i.e. MySQL) with multiple columns in IN (i.e. `(k,v) IN ((1, 2))`). #28888 (Azat Khuzhin).
Fix bug with `LowCardinality` in short-circuit function evaluation. Closes #28884. #28887 (Kruglov Pavel).
Fix reading of subcolumns from compact parts. #28873 (Anton Popov).
Fixed a race condition between `DROP PART` and `REPLACE/MOVE PARTITION` that might cause replicas to diverge in rare cases. #28864 (tavplubix).
Fix expressions compilation with short circuit evaluation. #28821 (Azat Khuzhin).
Fix extremely rare case when ReplicatedMergeTree replicas can diverge after hard reboot of all replicas. The error looks like `Part ... intersects (previous|next) part ...`. #28817 (alesapin).
Better check for connection usability and also catch any exception in `RabbitMQ` shutdown just in case. #28797 (Kseniia Sumarokova).
Fix benign race condition in ReplicatedMergeTreeQueue. Shouldn't be visible to users, but can lead to subtle bugs. #28734 (alesapin).
Fix possible crash for `SELECT` with partially created aggregate projection in case of exception. #28700 (Amos Bird).
Fix the coredump in the creation of distributed tables when the parameters passed in are wrong. #28686 (Zhiyong Wang).
Add Settings.Names, Settings.Values aliases for the system.processes table. #28685 (Vitaly).
Support for S2 Geometry library: fix the number of arguments required by the `s2RectAdd` and `s2RectContains` functions. #28663 (Bharat Nallan).
Fix invalid constant type conversion when a Nullable or LowCardinality primary key is used. #28636 (Amos Bird).
Fix "Column is not under aggregate function and not in GROUP BY" with PREWHERE (fixes #28461). #28502 (Azat Khuzhin).
ClickHouse release v21.10, 2021-10-16 {#clickhouse-release-v2110-2021-10-16}
Backward Incompatible Change {#backward-incompatible-change-2}
Now the following MergeTree table-level settings: `replicated_max_parallel_sends`, `replicated_max_parallel_sends_for_table`, `replicated_max_parallel_fetches`, `replicated_max_parallel_fetches_for_table` do nothing. They never worked well and were replaced with `max_replicated_fetches_network_bandwidth`, `max_replicated_sends_network_bandwidth` and `background_fetches_pool_size`. #28404 (alesapin).
New Feature {#new-feature-2}
Add feature for creating user-defined functions (UDF) as lambda expressions. Syntax: `CREATE FUNCTION {function_name} as ({parameters}) -> {function core}`. Example: `CREATE FUNCTION plus_one as (a) -> a + 1`. Authors @Realist007. #27796 (Maksim Kita) #23978 (Realist007).
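A minimal sketch of the lambda-UDF syntax described above (the function name is taken from the example in the entry; it must be run against a server that supports this feature):

```sql
-- Create a UDF as a lambda expression, then use it like a built-in function.
CREATE FUNCTION plus_one AS (a) -> a + 1;

SELECT plus_one(41);  -- 42
```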
Added `Executable` storage engine and `executable` table function. It enables data processing with external scripts in streaming fashion. #28102 (Maksim Kita) (ruct).
Added `ExecutablePool` storage engine. Similar to `Executable` but it uses a pool of long running processes. #28518 (Maksim Kita).
Add `ALTER TABLE ... MATERIALIZE COLUMN` query. #27038 (Vladimir Chebotarev).
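A sketch of how the new `MATERIALIZE COLUMN` query can be used; the table and column names here are illustrative, not from the source:

```sql
-- Add a column with a DEFAULT expression, then materialize it
-- so that existing parts get the value written on disk.
ALTER TABLE hits ADD COLUMN domain String DEFAULT domainWithoutWWW(url);
ALTER TABLE hits MATERIALIZE COLUMN domain;
```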
Support for partitioned write into `s3` table function. #23051 (Vladimir Chebotarev).
Support `lz4` compression format (in addition to `gz`, `bz2`, `xz`, `zstd`) for data import / export. #25310 (Bharat Nallan).
Allow positional arguments under setting `enable_positional_arguments`. Closes #2592. #27530 (Kseniia Sumarokova).
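A small sketch of positional arguments under the new setting; the query against `system.tables` is illustrative:

```sql
SET enable_positional_arguments = 1;

-- With the setting enabled, 1 and 2 refer to the first and second
-- columns of the SELECT list.
SELECT database, count() AS c
FROM system.tables
GROUP BY 1
ORDER BY 2 DESC;
```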
Accept user settings related to file formats in the `SETTINGS` clause in the `CREATE` query for s3 tables. This closes #27580. #28037 (Nikita Mikhaylov).
Allow SSL connection for the `RabbitMQ` engine. #28365 (Kseniia Sumarokova).
Add `getServerPort` function to allow getting the server port. When the port is not used by the server, throw an exception. #27900 (Amos Bird).
Add conversion functions between "snowflake id" and `DateTime`, `DateTime64`. See #27058. #27704 (jasine).
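A round-trip sketch with the snowflake-id conversion functions; the function names assumed here are `dateTimeToSnowflake` and `snowflakeToDateTime`:

```sql
-- Convert a DateTime to a snowflake id and back again.
WITH toDateTime('2021-08-15 18:57:56', 'UTC') AS dt
SELECT
    dateTimeToSnowflake(dt) AS id,
    snowflakeToDateTime(dateTimeToSnowflake(dt), 'UTC') AS roundtrip;
```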
Add function `SHA512`. #27830 (zhanglistar).
Add `log_queries_probability` setting that allows the user to write only a sample of queries to query_log. Closes #16609. #27527 (Nikolay Degterinsky).
Experimental Feature {#experimental-feature-2}
`web` type of disks to store readonly tables on a web server in the form of static files. See #23982. #25251 (Kseniia Sumarokova). This is mostly needed to facilitate testing of operation on shared storage and for easy importing of datasets. Not recommended to use before release 21.11.
Added new commands `BACKUP` and `RESTORE`. #21945 (Vitaly Baranov). This is under development and not intended to be used in the current version.
Performance Improvement {#performance-improvement-2}
Speed up `sumIf` and `countIf` aggregation functions. #28272 (Raúl Marín).
Create virtual projection for `minmax` indices. Now, when `allow_experimental_projection_optimization` is enabled, queries will use the minmax index instead of reading the data when possible. #26286 (Amos Bird).
Introducing two checks in `sequenceMatch` and `sequenceCount` that allow for early exit when some deterministic part of the sequence pattern is missing from the events list. This change unlocks many queries that would previously fail due to reaching the operations cap, and generally speeds up the pipeline. #27729 (Jakub Kuklis).
Enhance primary key analysis with always-monotonic information of binary functions, notably non-zero constant division. #28302 (Amos Bird).
Make the `hasAll` filter condition leverage bloom filter data-skipping indexes. #27984 (Braulio Valdivielso Martínez).
Speed up data parts loading by delaying the table startup process. #28313 (Amos Bird).
Fixed possible excessive number of conditions moved from `WHERE` to `PREWHERE` (optimization controlled by setting `optimize_move_to_prewhere`). #28139 (lthaooo).
Enable `optimize_distributed_group_by_sharding_key` by default. #28105 (Azat Khuzhin).
Improvement {#improvement-2}
Check the cluster name before creating a `Distributed` table; do not allow creating a table with an incorrect cluster name. Fixes #27832. #27927 (tavplubix).
Add aggregate function `quantileBFloat16Weighted`, similarly to other quantile...Weighted functions. This closes #27745. #27758 (Ivan Novitskiy).
Allow creating dictionaries with an empty attributes list. #27905 (Maksim Kita).
Add interactive documentation in `clickhouse-client` about how to reset the password. This is useful in the scenario when a user has installed ClickHouse, set up the password and instantly forgotten it. See #27750. #27903 (alexey-milovidov).
Support the case when the data is enclosed in an array in the `JSONAsString` input format. Closes #25517. #25633 (Kruglov Pavel).
Add new column `last_queue_update_exception` to the `system.replicas` table. #26843 (nvartolomei).
Support reconnections on failover for `MaterializedPostgreSQL` tables. Closes #28529. #28614 (Kseniia Sumarokova).
Generate a unique server UUID on first server start. #20089 (Bharat Nallan).
Introduce `connection_wait_timeout` setting (defaults to 5 seconds, 0 - do not wait) for the `MySQL` engine. #28474 (Azat Khuzhin).
Do not allow creating `MaterializedPostgreSQL` with bad arguments. Closes #28423. #28430 (Kseniia Sumarokova).
Use a real tmp file instead of the predefined "rows_sources" for vertical merges. This avoids generating garbage directories in tmp disks. #28299 (Amos Bird).
Added `libhdfs3_conf` in server config instead of exporting the env variable `LIBHDFS3_CONF` in clickhouse-server.service. This is for configuration of interaction with HDFS. #28268 (Zhichang Yu).
Fix removing of parts in a Temporary state which can lead to an unexpected exception (`Part %name% doesn't exist`). Fixes #23661. #28221 (Azat Khuzhin).
Fix `zookeeper_log.address` (before the first patch in this PR the address was always `::`) and reduce the number of `getpeername(2)` calls for this column (since each time an entry for `zookeeper_log` is added, `getpeername()` is called; cache this address in the zookeeper client to avoid this). #28212 (Azat Khuzhin).
Support implicit conversions between index in operator `[]` and key of type `Map` (e.g. different `Int` types, `String` and `FixedString`). #28096 (Anton Popov).
Support `ON CONFLICT` clause when inserting into PostgreSQL table engine or table function. Closes #27727. #28081 (Kseniia Sumarokova).
Lower restrictions for the `Enum` data type to allow attaching compatible data. Closes #26672. #28028 (Dmitry Novik).
Add a setting `empty_result_for_aggregation_by_constant_keys_on_empty_set` to control the behavior of grouping by constant keys on an empty set. This is to bring back the old behavior of #6842. #27932 (Amos Bird).
Added `replication_wait_for_inactive_replica_timeout` setting. It allows to specify how long to wait for inactive replicas to execute an `ALTER` / `OPTIMIZE` / `TRUNCATE` query (default is 120 seconds). If `replication_alter_partitions_sync` is 2 and some replicas are not active for more than `replication_wait_for_inactive_replica_timeout` seconds, then `UNFINISHED` will be thrown. #27931 (tavplubix).
Support lambda argument for the `APPLY` column transformer which allows applying functions with more than one argument. This is for #27877. #27901 (Amos Bird).
Enable `tcp_keep_alive_timeout` by default. #27882 (Azat Khuzhin).
Improve remote query cancellation (in case the remote server terminated abnormally). #27881 (Azat Khuzhin).
Use multipart copy upload for large S3 objects. #27858 (ianton-ru).
Allow symlink traversal for the library dictionary path. #27815 (Kseniia Sumarokova).
Now `ALTER MODIFY COLUMN` from `T` to `Nullable(T)` doesn't require a mutation. #27787 (victorgao).
Don't silently ignore errors and don't count delays in `ReadBufferFromS3`. #27484 (Vladimir Chebotarev).
Improve `ALTER ... MATERIALIZE TTL` by recalculating metadata only, without the actual TTL action. #27019 (lthaooo).
Allow reading the list of custom top level domains without a new line at EOF. #28213 (Azat Khuzhin).
Bug Fix {#bug-fix-1}
Fix cases when reading compressed data from `carbon-clickhouse` fails with 'attempt to read after end of file'. Closes #26149. #28150 (FArthur-cmd).
Fix checking access grants when executing the `GRANT WITH REPLACE` statement with an `ON CLUSTER` clause. This PR improves fix #27001. #27983 (Vitaly Baranov).
Allow selecting with `extremes = 1` from a column of the type `LowCardinality(UUID)`. #27918 (Vitaly Baranov).
Fix PostgreSQL-style cast (`::` operator) with negative numbers. #27876 (Anton Popov).
After #26864. Fix shutdown of `NamedSessionStorage`: session contexts stored in `NamedSessionStorage` are now destroyed before destroying the global context. #27875 (Vitaly Baranov).
Bugfix for `windowFunnel` "strict" mode. This fixes #27469. #27563 (achimbab).
Fix infinite loop while reading a truncated `bzip2` archive. #28543 (Azat Khuzhin).
Fix UUID overlap in `DROP TABLE` for internal DDL from `MaterializedMySQL`. MaterializedMySQL is an experimental feature. #28533 (Azat Khuzhin).
Fix `There is no subcolumn` error while selecting from tables which have `Nested` columns and scalar columns with a dot in the name and the same prefix as `Nested` (e.g. `n.id UInt32, n.arr1 Array(UInt64), n.arr2 Array(UInt64)`). #28531 (Anton Popov).
Fix bug which can lead to the error `Existing table metadata in ZooKeeper differs in sorting key expression.` after ALTER of `ReplicatedVersionedCollapsingMergeTree`. Fixes #28515. #28528 (alesapin).
Fixed possible ZooKeeper watches leak (minor issue) in background processing of the distributed DDL queue. Closes #26036. #28446 (tavplubix).
Fix missing quoting of table names in the `MaterializedPostgreSQL` engine. Closes #28316. #28433 (Kseniia Sumarokova).
Fix the wrong behaviour of non-joined rows from a nullable column. Closes #27691. #28349 (vdimir).
Fix NOT-IN index optimization when not all key columns are used. This fixes #28120. #28315 (Amos Bird).
Fix intersecting parts due to a new part having been replaced with an empty part. #28310 (Azat Khuzhin).
Fix inconsistent result in queries with `ORDER BY` and `Merge` tables with enabled setting `optimize_read_in_order`. #28266 (Anton Popov).
Fix possible read of uninitialized memory for queries with `Nullable(LowCardinality)` type and the setting `extremes` set to 1. Fixes #28165. #28205 (Nikolai Kochetov).
Multiple small fixes for projections. See detailed description in the PR. #28178 (Amos Bird).
Fix extremely rare segfaults on shutdown due to incorrect order of context/config reloader shutdown. #28088 (nvartolomei).
Fix handling of a null value with type `Nullable(String)` in the function `JSONExtract`. This fixes #27929 and #27930. This was introduced in https://github.com/ClickHouse/ClickHouse/pull/25452. #27939 (Amos Bird).
Multiple fixes for the new `clickhouse-keeper` tool. Fix a rare bug in `clickhouse-keeper` when the client can receive a watch response before the request-response. #28197 (alesapin). Fix incorrect behavior in `clickhouse-keeper` when list watches (`getChildren`) are triggered with `set` requests for children. #28190 (alesapin). Fix a rare case when changes of `clickhouse-keeper` settings may lead to lost logs and server hang. #28360 (alesapin). Fix bug in `clickhouse-keeper` which can lead to endless logs when `rotate_logs_interval` is decreased. #28152 (alesapin).
Build/Testing/Packaging Improvement {#buildtestingpackaging-improvement-2}
Enable Thread Fuzzer in Stress Test. Thread Fuzzer is a ClickHouse feature that allows testing more permutations of thread scheduling and discovering more potential issues. This closes #9813. This closes #9814. This closes #9515. This closes #9516. #27538 (alexey-milovidov).
Add new log level `test` for testing environments. It is even more verbose than the default `trace`. #28559 (alesapin).
Print out git status information at CMake configure stage. #28047 (Braulio Valdivielso Martínez).
Temporarily switched the ubuntu apt repository to the mirror ru.archive.ubuntu.com as the default one (archive.ubuntu.com) is not responding from our CI. #28016 (Ilya Yatsishin).
ClickHouse release v21.9, 2021-09-09 {#clickhouse-release-v219-2021-09-09}
Backward Incompatible Change {#backward-incompatible-change-3} | {"source_file": "2021.md"} | [
Do not output trailing zeros in text representation of `Decimal` types. Example: `1.23` will be printed instead of `1.230000` for a decimal with scale 6. This closes #15794. It may introduce slight incompatibility if your applications somehow relied on the trailing zeros. Serialization in output formats can be controlled with the setting `output_format_decimal_trailing_zeros`. Implementation of `toString` and casting to String is changed unconditionally. #27680 (alexey-milovidov).
Do not allow applying a parametric aggregate function with the `-Merge` combinator to an aggregate function state if the state was produced by an aggregate function with different parameters. For example, the state of `fooState(42)(x)` cannot be finalized with `fooMerge(s)` or `fooMerge(123)(s)`; parameters must be specified explicitly like `fooMerge(42)(s)` and must be equal. It does not affect some special aggregate functions like `quantile` and `sequence*` that use parameters for finalization only. #26847 (tavplubix).
Under clickhouse-local, always treat local addresses with a port as remote. #26736 (Raúl Marín).
Fix the issue that in case of some sophisticated query with column aliases identical to the names of expressions, a bad cast may happen. This fixes #25447. This fixes #26914. This fix may introduce backward incompatibility: if there are different expressions with identical names, an exception will be thrown. It may break some rare cases when `enable_optimize_predicate_expression` is set. #26639 (alexey-milovidov).
Now a scalar subquery always returns a `Nullable` result if its type can be `Nullable`. It is needed because in case of an empty subquery its result should be `Null`. Previously, it was possible to get an error about incompatible types (type deduction does not execute the scalar subquery, and it could use a not-nullable type). A scalar subquery with an empty result which can't be converted to `Nullable` (like `Array` or `Tuple`) now throws an error. Fixes #25411. #26423 (Nikolai Kochetov).
Introduce syntax for here documents. Example: `SELECT $doc$ VALUE $doc$`. #26671 (Maksim Kita). This change is backward incompatible if in a query there are identifiers that contain `$`. #28768.
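A short sketch of the here-document string syntax; the `$doc$` tag follows the example in the entry:

```sql
-- A $tag$ ... $tag$ literal lets you embed quotes without escaping.
SELECT $doc$VALUE WITH 'single quotes' AND "double quotes"$doc$ AS s;
```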
Now indices can handle Nullable types, including `isNull` and `isNotNull`. #12433 and #12455 (Amos Bird) and #27250 (Azat Khuzhin). But this was done with on-disk format changes, and even though a new server can read old data, an old server cannot. Also, in case you have `MINMAX` data skipping indices, you may get a `Data after mutation/merge is not byte-identical` error, since the new index will have the `.idx2` extension while before it was `.idx`. That said, you should not delay updating all existing replicas in this case; otherwise, if an old replica (<21.9) downloads data from a new replica with 21.9+, it will not be able to apply the index for the downloaded part.
New Feature {#new-feature-3}
Implementation of short circuit function evaluation, closes #12587. Add setting `short_circuit_function_evaluation` to configure short circuit function evaluation. #23367 (Kruglov Pavel).
Add support for INTERSECT, EXCEPT, ANY, ALL operators. #24757 (Kirill Ershov). (Kseniia Sumarokova).
Add support for encryption at the virtual file system level (data encryption at rest) using the AES-CTR algorithm. #24206 (Latysheva Alexandra). (Vitaly Baranov) #26733 #26377 #26465.
Added natural language processing (NLP) functions for tokenization, stemming, lemmatizing and search in synonym extensions. #24997 (Nikolay Degterinsky).
Added integration with the S2 geometry library. #24980 (Andr0901). (Nikita Mikhaylov).
Add SQLite table engine, table function, database engine. #24194 (Arslan Gumerov). (Kseniia Sumarokova).
Added support for a custom query for `MySQL`, `PostgreSQL`, `ClickHouse`, `JDBC`, `Cassandra` dictionary sources. Closes #1270. #26995 (Maksim Kita).
Add shared (replicated) storage of users, roles, row policies, quotas and settings profiles through ZooKeeper. #27426 (Kevin Michel).
Add compression for `INTO OUTFILE` that automatically chooses the compression algorithm. Closes #3473. #27134 (Filatenkov Artur).
Add `INSERT ... FROM INFILE` similarly to `SELECT ... INTO OUTFILE`. #27655 (Filatenkov Artur).
Added `complex_key_range_hashed` dictionary. Closes #22029. #27629 (Maksim Kita).
Support expressions in the JOIN ON section. Closes #21868. #24420 (Vladimir C).
When a client connects to the server, it receives information about all warnings that have already been collected by the server. (It can be disabled by using the option `--no-warnings`.) Add `system.warnings` table to collect warnings about server configuration. #26246 (Filatenkov Artur). #26282 (Filatenkov Artur).
Allow using constant expressions from WITH and SELECT in aggregate function parameters. Closes #10945. #27531 (abel-cheng).
Add `tupleToNameValuePairs`, a function that turns a named tuple into an array of pairs. #27505 (Braulio Valdivielso Martínez).
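A small sketch of the new function on a named tuple built with an explicit `CAST` (the element names `lo`/`hi` are illustrative):

```sql
-- Turn a named tuple into an array of (name, value) pairs.
SELECT tupleToNameValuePairs(
    CAST((3.0, 5.5), 'Tuple(lo Float64, hi Float64)')
) AS pairs;
```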
Add support for the `bzip2` compression method for import/export. Closes #22428. #27377 (Nikolay Degterinsky).
Added `bitmapSubsetOffsetLimit(bitmap, offset, cardinality_limit)` function. It creates a subset of a bitmap, limiting the results to `cardinality_limit`, starting from position `offset`. #27234 (DHBin).
Add column `default_database` to `system.users`. #27054 (kevin wan).
Supported `cluster` macros inside the table functions 'cluster' and 'clusterAllReplicas'. #26913 (polyprogrammist).
Add new functions `currentRoles()`, `enabledRoles()`, `defaultRoles()`. #26780 (Vitaly Baranov).
New functions `currentProfiles()`, `enabledProfiles()`, `defaultProfiles()`. #26714 (Vitaly Baranov).
Add functions that return the (initial_)query_id of the current query. This closes #23682. #26410 (Alexey Boykov).
Add `REPLACE GRANT` feature. #26384 (Caspian).
The `EXPLAIN` query now has an `EXPLAIN ESTIMATE ...` mode that will show information about read rows, marks and parts from MergeTree tables. Closes #23941. #26131 (fastio).
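A sketch of the new `EXPLAIN ESTIMATE` mode; the table name is illustrative and must be a MergeTree table:

```sql
-- Shows estimated rows, marks and parts to be read, without running the query.
EXPLAIN ESTIMATE
SELECT count()
FROM hits
WHERE EventDate = today();
```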
Added `system.zookeeper_log` table. All actions of the ZooKeeper client are logged into this table. Implements #25449. #26129 (tavplubix).
Zero-copy replication for `ReplicatedMergeTree` over `HDFS` storage. #25918 (Zhichang Yu).
Allow inserting Nested type as array of structs in `Arrow`, `ORC` and `Parquet` input formats. #25902 (Kruglov Pavel).
Add a new datatype `Date32` (stored as Int32) that supports the same date range as `DateTime64`. Support loading Parquet date32 into ClickHouse `Date32`. Add a new function `toDate32`, similar to `toDate`. #25774 (LiuNeng).
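A sketch of the extended range offered by the new type; the table name is illustrative:

```sql
-- Date32 covers dates outside the range of the 16-bit Date type.
CREATE TABLE dates_demo (d Date32) ENGINE = Memory;
INSERT INTO dates_demo VALUES ('2120-01-01');
SELECT d, toDate32('2120-01-01') AS converted FROM dates_demo;
```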
Allow setting a default database for users. #25268. #25687 (kevin wan).
Add an optional parameter to the `MongoDB` engine to accept connection string options and support SSL connections. Closes #21189. Closes #21041. #22045 (Omar Bazaraa).
Experimental Feature {#experimental-feature-3}
Added a compression codec `AES_128_GCM_SIV` which encrypts columns instead of compressing them. #19896 (PHO). Will be rewritten, do not use.
Rename `MaterializeMySQL` to `MaterializedMySQL`. #26822 (tavplubix).
Performance Improvement {#performance-improvement-3}
Improve the performance of fast queries when `max_execution_time = 0` by reducing the number of `clock_gettime` system calls. #27325 (filimonov).
Specialize date time related comparison to achieve better performance. This fixes #27083. #27122 (Amos Bird).
Share file descriptors in concurrent reads of the same files. There is no noticeable performance difference on Linux, but the number of opened files will be significantly (10..100 times) lower on typical servers and it makes operations easier. See #26214. #26768 (alexey-milovidov).
Improve latency of short queries that require reading from tables with a large number of columns. #26371 (Anton Popov).
Don't build sets for indices when analyzing a query. #26365 (Raúl Marín).
Vectorize the SUM of Nullable integer types with native representation (David Manzanares, Raúl Marín). #26248 (Raúl Marín).
Compile expressions involving columns with `Enum` types. #26237 (Maksim Kita).
Compile aggregate functions `groupBitOr`, `groupBitAnd`, `groupBitXor`. #26161 (Maksim Kita).
Improved memory usage with better block size prediction when reading empty DEFAULT columns. Closes #17317. #25917 (Vladimir Chebotarev).
Reduce memory usage and number of read rows in queries with `ORDER BY primary_key`. #25721 (Anton Popov).
Enable `distributed_push_down_limit` by default. #27104 (Azat Khuzhin).
Make `toTimeZone` monotonic when the time zone is a constant value, to support partition pruning in such queries. #26261 (huangzhaowei).
Improvement {#improvement-3}
Mark window functions as ready for general use. Remove the
allow_experimental_window_functions
setting.
#27184
(
Alexander Kuzmenkov
).
Improve compatibility with non-whole-minute timezone offsets.
#27080
(
RaΓΊl MarΓn
).
If file descriptor in
File
table is regular file - allow to read multiple times from it. It allows
clickhouse-local
to read multiple times from stdin (with multiple SELECT queries or subqueries) if stdin is a regular file like
clickhouse-local --query "SELECT * FROM table UNION ALL SELECT * FROM table" ... < file
. This closes
#11124
. Co-authored with (
alexey-milovidov
).
#25960
(
BoloniniD
).
Remove duplicate index analysis and avoid possible invalid limit checks during projection analysis.
#27742
(
Amos Bird
).
Enable query parameters to be passed in the body of HTTP requests.
#27706
(
Hermano Lustosa
).
Disallow
arrayJoin
on partition expressions.
#27648
(
RaΓΊl MarΓn
).
Log client IP address if authentication fails.
#27514
(
Misko Lee
).
Use bytes instead of strings for binary data in the GRPC protocol.
#27431
(
Vitaly Baranov
).
Send response with error message if HTTP port is not set and user tries to send HTTP request to TCP port.
#27385
(
Braulio Valdivielso MartΓnez
).
Add
_CAST
function for internal usage, which will not preserve type nullability, but non-internal cast will preserve according to setting
cast_keep_nullable
. Closes
#12636
.
#27382
(
Kseniia Sumarokova
).
Add setting
log_formatted_queries
to log additional formatted query into
system.query_log
. It's useful for normalized query analysis because functions like
normalizeQuery
and
normalizeQueryKeepNames
don't parse/format queries in order to achieve better performance.
#27380
(
Amos Bird
).
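A sketch of how the new setting might be used together with `system.query_log` (column name as described above):

```sql
SET log_formatted_queries = 1;
SELECT 1 FORMAT Null;
SYSTEM FLUSH LOGS;

-- formatted_query holds the formatted text of the query,
-- convenient for normalized query analysis.
SELECT query, formatted_query
FROM system.query_log
WHERE type = 'QueryFinish'
ORDER BY event_time DESC
LIMIT 1;
```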
Add two settings
max_hyperscan_regexp_length
and
max_hyperscan_regexp_total_length
to prevent huge regexp being used in hyperscan related functions, such as
multiMatchAny
.
#27378
(
Amos Bird
).
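The limits can be set per query; a regexp exceeding them raises an exception instead of triggering an expensive hyperscan compilation. A sketch:

```sql
SET max_hyperscan_regexp_length = 1024,
    max_hyperscan_regexp_total_length = 8192;

-- Rejected before compilation if any pattern (or their total) is too long.
SELECT multiMatchAny('clickhouse', ['click.*', '^house']);
```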
Memory consumed by bitmap aggregate functions now is taken into account for memory limits. This closes
#26555
.
#27252
(
alexey-milovidov
).
Add 10 seconds cache for S3 proxy resolver.
#27216
(
ianton-ru
).
Split the global mutex into per-regexp construction mutexes. This helps avoid huge regexp construction blocking other related threads.
#27211
(
Amos Bird
).
Support schema for PostgreSQL database engine. Closes
#27166
.
#27198
(
Kseniia Sumarokova
).
Track memory usage in clickhouse-client.
#27191
(
Filatenkov Artur
).
Try recording
query_kind
in
system.query_log
even when query fails to start.
#27182
(
Amos Bird
).
Added column
replica_is_active
that maps replica name to replica active status in table
system.replicas
. Closes
#27138
.
#27180
(
Maksim Kita
).
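The new column can be inspected like this (a sketch):

```sql
-- Map of replica name -> active flag for each replicated table.
SELECT database, table, replica_is_active
FROM system.replicas;
```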
Allow to pass query settings via server URI in Web UI.
#27177
(
kolsys
).
Add a new metric called
MaxPushedDDLEntryID
which is the maximum DDL entry id that the current node has pushed to ZooKeeper.
#27174
(
Fuwang Hu
).
Improved checks for node existence and for empty string nodes when
clickhouse-keeper
creates znodes.
#27125
(
小路
).
Merge JOIN correctly handles empty set in the right.
#27078
(
Vladimir C
).
Now functions can be shard-level constants, which means if it's executed in the context of some distributed table, it generates a normal column, otherwise it produces a constant value. Notable functions are:
hostName()
,
tcpPort()
,
version()
,
buildId()
,
uptime()
, etc.
#27020
(
Amos Bird
).
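A sketch of the distinction, assuming a cluster named `my_cluster`:

```sql
-- Standalone: hostName() is a constant.
SELECT hostName();

-- In a distributed context it becomes a per-shard column:
SELECT hostName(), count()
FROM clusterAllReplicas('my_cluster', system.one)
GROUP BY hostName();
```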
Updated
extractAllGroupsHorizontal
- upper limit on the number of matches per row can be set via optional third argument.
#26961
(
Vasily Nemkov
).
Expose
RocksDB
statistics via system.rocksdb table. Read rocksdb options from ClickHouse config (
rocksdb...
keys). NOTE: ClickHouse does not rely on RocksDB, it is just one of the additional integration storage engines.
#26821
(
Azat Khuzhin
).
Less verbose internal RocksDB logs. NOTE: ClickHouse does not rely on RocksDB, it is just one of the additional integration storage engines. This closes
#26252
.
#26789
(
alexey-milovidov
).
Changing default roles affects new sessions only.
#26759
(
Vitaly Baranov
).
Watchdog is disabled in docker by default. Fix for not handling ctrl+c.
#26757
(
Mikhail f. Shiryaev
).
SET PROFILE
now applies constraints too if they're set for a passed profile.
#26730
(
Vitaly Baranov
).
Improve handling of
KILL QUERY
requests.
#26675
(
Raúl Marín
).
mapPopulateSeries
function supports
Map
type.
#26663
(
Ildus Kurbangaliev
).
Fix excessive (x2) connect attempts with
skip_unavailable_shards
.
#26658
(
Azat Khuzhin
).
Avoid hanging
clickhouse-benchmark
if connection fails (i.e. on EMFILE).
#26656
(
Azat Khuzhin
).
Allow more threads to be used by the Kafka engine.
#26642
(
feihengye
).
Add round-robin support for
clickhouse-benchmark
(it does not differ from the regular multi host/port run except for statistics report).
#26607
(
Azat Khuzhin
).
Executable dictionaries (
executable
,
executable_pool
) can now be created with a DDL query using
clickhouse-local
. Closes
#22355
.
#26510
(
Maksim Kita
).
Set client query kind for
mysql
and
postgresql
compatibility protocol handlers.
#26498
(
anneji-dev
).
Apply
LIMIT
on the shards for queries like
SELECT * FROM dist ORDER BY key LIMIT 10
w/
distributed_push_down_limit=1
. Avoid running
Distinct
/
LIMIT BY
steps for queries like
SELECT DISTINCT sharding_key FROM dist ORDER BY key
. Now
distributed_push_down_limit
is respected by
optimize_distributed_group_by_sharding_key
optimization.
#26466
(
Azat Khuzhin
).
Updated protobuf to 3.17.3. Changelogs are available on https://github.com/protocolbuffers/protobuf/releases.
#26424
(
Ilya Yatsishin
).
Enable
use_hedged_requests
setting that allows to mitigate tail latencies on large clusters.
#26380
(
alexey-milovidov
).
Improve behaviour with non-existing host in user allowed host list.
#26368
(
ianton-ru
).
Add ability to set
Distributed
directory monitor settings via CREATE TABLE (i.e.
CREATE TABLE dist (key Int) Engine=Distributed(cluster, db, table) SETTINGS monitor_batch_inserts=1
and similar).
#26336
(
Azat Khuzhin
).
Save server address in history URLs in web UI if it differs from the origin of web UI. This closes
#26044
.
#26322
(
alexey-milovidov
).
Add events to profile calls to
sleep
/
sleepEachRow
.
#26320
(
Raúl Marín
).
Allow to reuse connections of shards among different clusters. It also avoids creating new connections when using
cluster
table function.
#26318
(
Amos Bird
).
Control the execution period of clearing old temporary directories via a parameter with a default value.
#26212
.
#26313
(
fastio
).
Add a setting
function_range_max_elements_in_block
to tune the safety threshold for data volume generated by function
range
. This closes
#26303
.
#26305
(
alexey-milovidov
).
Check the hash function at table creation, not at sampling time. Added a MergeTree setting: if someone created a table with an incorrect sampling column but sampling is never used, the check can be disabled to start the server without an exception.
#26256
(
zhaoyu
).
Added
output_format_avro_string_column_pattern
setting to put specified String columns to Avro as string instead of default bytes. Implements
#22414
.
#26245
(
Ilya Golshtein
).
Add information about column sizes in
system.columns
table for
Log
and
TinyLog
tables. This closes
#9001
.
#26241
(
Nikolay Degterinsky
).
Don't throw exception when querying
system.detached_parts
table if there is custom disk configuration and
detached
directory does not exist on some disks. This closes
#26078
.
#26236
(
alexey-milovidov
).
Check for non-deterministic functions in keys, including constant expressions like
now()
,
today()
. This closes
#25875
. This closes
#11333
.
#26235
(
alexey-milovidov
).
Convert timestamp and timestamptz data types to
DateTime64
in PostgreSQL table engine.
#26234
(
jasine
).
Apply aggressive IN index analysis for projections so that better projection candidate can be selected.
#26218
(
Amos Bird
).
Remove GLOBAL keyword for IN when scalar function is passed. In previous versions, if user specified
GLOBAL IN f(x)
exception was thrown.
#26217
(
Amos Bird
).
Add error id (like
BAD_ARGUMENTS
) to exception messages. This closes
#25862
.
#26172
(
alexey-milovidov
).
Fix incorrect output with --progress option for clickhouse-local. Progress bar will be cleared once it gets to 100% - same as it is done for clickhouse-client. Closes
#17484
.
#26128
(
Kseniia Sumarokova
).
Add
merge_selecting_sleep_ms
setting.
#26120
(
lthaooo
).
Remove complicated usage of Linux AIO with one block readahead and replace it with plain simple synchronous IO with O_DIRECT. In previous versions, the setting
min_bytes_to_use_direct_io
may not work correctly if
max_threads
is greater than one. Reading with direct IO (that is disabled by default for queries and enabled by default for large merges) will work in less efficient way. This closes
#25997
.
#26003
(
alexey-milovidov
).
Flush
Distributed
table on
REPLACE TABLE
query. Resolves
#24566
- Do not replace (or create) table on
[CREATE OR] REPLACE TABLE ... AS SELECT
query if insertion into new table fails. Resolves
#23175
.
#25895
(
tavplubix
).
Add
views
column to system.query_log containing the names of the (materialized or live) views executed by the query. Adds a new log table (
system.query_views_log
) that contains information about each view executed during a query. Modifies view execution: when an exception is thrown while executing a view, any view that has already started will continue running until it finishes. This used to be the behaviour under parallel_view_processing=true and now it's always the same behaviour. Dependent views now report reading progress to the context.
#25714
(
Raúl Marín
).
Do connection draining asynchronously upon finishing executing distributed queries. A new server setting is added
max_threads_for_connection_collector
which specifies the number of workers to recycle connections in the background. If the pool is full, the connection will be drained synchronously, but a bit differently than before: it's drained after we send EOS to the client, the query will succeed immediately after receiving enough data, and any exception will be logged instead of being thrown to the client. Added setting
drain_timeout
(3 seconds by default). Connection draining will disconnect upon timeout.
#25674
(
Amos Bird
).
Support for multiple includes in configuration. It is possible to include users configuration, remote servers configuration from multiple sources. Simply place
<include />
element with
from_zk
,
from_env
or
incl
attribute and it will be replaced with the substitution.
#24404
(
nvartolomei
).
Fix multiple block insertion into distributed table with
insert_distributed_one_random_shard = 1
. This is a marginal feature. Mark as improvement.
#23140
(
Amos Bird
).
Support
LowCardinality
and
FixedString
keys/values for
Map
type.
#21543
(
hexiaoting
).
Enable reloading of local disk config.
#19526
(
taiyang-li
).
Bug Fix {#bug-fix-2}
Fix a couple of bugs that may cause replicas to diverge.
#27808
(
tavplubix
).
Fix a rare bug in
DROP PART
which can lead to the error
Unexpected merged part intersects drop range
.
#27807
(
alesapin
).
Prevent crashes for some formats when NULL (tombstone) message was coming from Kafka. Closes
#19255
.
#27794
(
filimonov
).
Fix column filtering with union distinct in subquery. Closes
#27578
.
#27689
(
Kseniia Sumarokova
).
Fix bad type cast when functions like
arrayHas
are applied to arrays of LowCardinality of Nullable of different non-numeric types like
DateTime
and
DateTime64
. In previous versions bad cast occurs. In new version it will lead to exception. This closes
#26330
.
#27682
(
alexey-milovidov
).
Fix postgresql table function resulting in non-closing connections. Closes
#26088
.
#27662
(
Kseniia Sumarokova
).
Fixed another case of
Unexpected merged part ... intersecting drop range ...
error.
#27656
(
tavplubix
).
Fix an error with aliased column in
Distributed
table.
#27652
(
Vladimir C
).
After setting
max_memory_usage*
to non-zero value it was not possible to reset it back to 0 (unlimited). It's fixed.
#27638
(
tavplubix
).
Fixed underflow of the time value when constructing it from components. Closes
#27193
.
#27605
(
Vasily Nemkov
).
Fix crash during projection materialization when some parts contain missing columns. This fixes
#27512
.
#27528
(
Amos Bird
).
Fix metric
BackgroundMessageBrokerSchedulePoolTask
, maybe mistyped.
#27452
(
Ben
).
Fix distributed queries with zero shards and aggregation.
#27427
(
Azat Khuzhin
).
Compatibility when
/proc/meminfo
does not contain KB suffix.
#27361
(
Mike Kot
).
Fix incorrect result for query with row-level security, PREWHERE and LowCardinality filter. Fixes
#27179
.
#27329
(
Nikolai Kochetov
).
Fixed incorrect validation of partition id for MergeTree tables that were created with the old syntax.
#27328
(
tavplubix
).
Fix MySQL protocol when using parallel formats (CSV / TSV).
#27326
(
Raúl Marín
).
Fix
Cannot find column
error for queries with sampling. Was introduced in
#24574
. Fixes
#26522
.
#27301
(
Nikolai Kochetov
).
Fix errors like
Expected ColumnLowCardinality, gotUInt8
or
Bad cast from type DB::ColumnVector<char8_t> to DB::ColumnLowCardinality
for some queries with
LowCardinality
in
PREWHERE
. And more importantly, fix the lack of whitespace in the error message. Fixes
#23515
.
#27298
(
Nikolai Kochetov
).
Fix
distributed_group_by_no_merge = 2
with
distributed_push_down_limit = 1
or
optimize_distributed_group_by_sharding_key = 1
with
LIMIT BY
and
LIMIT OFFSET
.
#27249
(
Azat Khuzhin
). This is an obscure combination of settings that hardly anyone uses.
Fix mutation stuck on invalid partitions in non-replicated MergeTree.
#27248
(
Azat Khuzhin
).
In case of ambiguity, lambda functions prefer its arguments to other aliases or identifiers.
#27235
(
Raúl Marín
).
Fix column structure in merge join, close
#27091
.
#27217
(
Vladimir C
).
In rare cases
system.detached_parts
table might contain incorrect information for some parts, it's fixed. Fixes
#27114
.
#27183
(
tavplubix
).
Fix uninitialized memory in functions
multiSearch*
with empty array, close
#27169
.
#27181
(
Vladimir C
).
Fix synchronization in GRPCServer. This PR fixes
#27024
.
#27064
(
Vitaly Baranov
).
Fixed
cache
,
complex_key_cache
,
ssd_cache
,
complex_key_ssd_cache
configuration parsing. Options
allow_read_expired_keys
,
max_update_queue_size
,
update_queue_push_timeout_milliseconds
,
query_wait_timeout_milliseconds
were not parsed for dictionaries with non
cache
type.
#27032
(
Maksim Kita
).
Fix possible mutation being stuck due to a race with DROP_RANGE.
#27002
(
Azat Khuzhin
).
Now partition ID in queries like
ALTER TABLE ... PARTITION ID xxx
validates for correctness. Fixes
#25718
.
#26963
(
alesapin
).
Fix "Unknown column name" error with multiple JOINs in some cases, close
#26899
.
#26957
(
Vladimir C
).
Fix reading of custom TLDs (stops processing with lower buffer or bigger file).
#26948
(
Azat Khuzhin
).
Fix error
Missing columns: 'xxx'
when
DEFAULT
column references other non materialized column without
DEFAULT
expression. Fixes
#26591
.
#26900
(
alesapin
).
Fix loading of dictionary keys in
library-bridge
for
library
dictionary source.
#26834
(
Kseniia Sumarokova
).
Aggregate function parameters might be lost when applying some combinators causing exceptions like
Conversion from AggregateFunction(topKArray, Array(String)) to AggregateFunction(topKArray(10), Array(String)) is not supported
. It's fixed. Fixes
#26196
and
#26433
.
#26814
(
tavplubix
).
Add
event_time_microseconds
value for
REMOVE_PART
in
system.part_log
. In previous versions it was not set.
#26720
(
Azat Khuzhin
).
Do not remove data on ReplicatedMergeTree table shutdown to avoid data-to-metadata inconsistency.
#26716
(
nvartolomei
).
Sometimes
SET ROLE
could work incorrectly, this PR fixes that.
#26707
(
Vitaly Baranov
).
Some fixes for parallel formatting (https://github.com/ClickHouse/ClickHouse/issues/26694).
#26703
(
Raúl Marín
).
Fix potential nullptr dereference in window functions. This fixes
#25276
.
#26668
(
Alexander Kuzmenkov
).
Fix clickhouse-client history file conversion (when upgrading from the format of 3 years old version of clickhouse-client) if file is empty.
#26589
(
Azat Khuzhin
).
Fix incorrect function names of groupBitmapAnd/Or/Xor (could be displayed on some occasions). This fixes
#26557
(
Amos Bird
).
Update
chown
cmd check in the clickhouse-server docker entrypoint. It fixes a bug where cluster pod restarts failed (or timed out) on Kubernetes.
#26545
(
Ky Li
).
Fix crash in
RabbitMQ
shutdown in case
RabbitMQ
setup was not started. Closes
#26504
.
#26529
(
Kseniia Sumarokova
).
Fix issues with
CREATE DICTIONARY
query if dictionary name or database name was quoted. Closes
#26491
.
#26508
(
Maksim Kita
).
Fix broken column name resolution after rewriting column aliases. This fixes
#26432
.
#26475
(
Amos Bird
).
Fix some fuzzed msan crash. Fixes
#22517
.
#26428
(
Nikolai Kochetov
).
Fix infinite non joined block stream in
partial_merge_join
close
#26325
.
#26374
(
Vladimir C
).
Fix possible crash when login as dropped user. This PR fixes
#26073
.
#26363
(
Vitaly Baranov
).
Fix
optimize_distributed_group_by_sharding_key
for multiple columns (leads to incorrect result w/
optimize_skip_unused_shards=1
/
allow_nondeterministic_optimize_skip_unused_shards=1
and multiple columns in sharding key expression).
#26353
(
Azat Khuzhin
).
Fixed rare bug in lost replica recovery that may cause replicas to diverge.
#26321
(
tavplubix
).
Fix zstd decompression (for import/export in zstd framing format that is unrelated to tables data) in case there are escape sequences at the end of internal buffer. Closes
#26013
.
#26314
(
Kseniia Sumarokova
).
Fix logical error on join with totals, close
#26017
.
#26250
(
Vladimir C
).
Remove excessive newline in
thread_name
column in
system.stack_trace
table. This fixes
#24124
.
#26210
(
alexey-milovidov
).
Fix potential crash if more than one
untuple
expression is used.
#26179
(
alexey-milovidov
).
Don't throw exception in
toString
for Nullable Enum if Enum does not have a value for zero, close
#25806
.
#26123
(
Vladimir C
).
Fixed incorrect
sequence_id
in MySQL protocol packets that ClickHouse sends on exception during query execution. It might cause MySQL client to reset connection to ClickHouse server. Fixes
#21184
.
#26051
(
tavplubix
).
Fix for the case that
cutToFirstSignificantSubdomainCustom()
/
cutToFirstSignificantSubdomainCustomWithWWW()
/
firstSignificantSubdomainCustom()
returns incorrect type for consts, and hence
optimize_skip_unused_shards
does not work:.
#26041
(
Azat Khuzhin
).
Fix possible mismatched header when using normal projection with prewhere. This fixes
#26020
.
#26038
(
Amos Bird
).
Fix sharding_key from column w/o function for remote() (before
select * from remote('127.1', system.one, dummy)
leads to
Unknown column: dummy, there are only columns .
error).
#25824
(
Azat Khuzhin
).
Fixed
Not found column ...
and
Missing column ...
errors when selecting from
MaterializeMySQL
. Fixes
#23708
,
#24830
,
#25794
.
#25822
(
tavplubix
).
Fix
optimize_skip_unused_shards_rewrite_in
for non-UInt64 types (may select incorrect shards eventually or throw
Cannot infer type of an empty tuple
or
Function tuple requires at least one argument
).
#25798
(
Azat Khuzhin
).
Build/Testing/Packaging Improvement {#buildtestingpackaging-improvement-3}
Now we run stateful and stateless tests in random timezones. Fixes
#12439
. Reading String as DateTime and writing DateTime as String in Protobuf format now respect timezone. Reading UInt16 as DateTime in Arrow and Parquet formats now treat it as Date and then converts to DateTime with respect to DateTime's timezone, because Date is serialized in Arrow and Parquet as UInt16. GraphiteMergeTree now respect time zone for rounding of times. Fixes
#5098
. Author: @alexey-milovidov.
#15408
(
alesapin
).
clickhouse-test
supports SQL tests with
Jinja2
templates.
#26579
(
Vladimir C
).
Add support for build with
clang-13
. This closes
#27705
.
#27714
(
alexey-milovidov
).
#27777
(
Sergei Semin
)
Add CMake options to build with or without specific CPU instruction set. This is for
#17469
and
#27509
.
#27508
(
alexey-milovidov
).
Fix linking of auxiliary programs when using dynamic libraries.
#26958
(
Raúl Marín
).
Update RocksDB to
2021-07-16
master.
#26411
(
alexey-milovidov
).
ClickHouse release v21.8, 2021-08-12 {#clickhouse-release-v218-2021-08-12}
Upgrade Notes {#upgrade-notes}
New version is using
Map
data type for system logs tables (
system.query_log
,
system.query_thread_log
,
system.processes
,
system.opentelemetry_span_log
). These tables will be auto-created with new data types. Virtual columns are created to support old queries. Closes
#18698
.
#23934
,
#25773
(
hexiaoting
,
sundy-li
,
Maksim Kita
). If you want to
downgrade
from version 21.8 to older versions, you will need to cleanup system tables with logs manually. Look at
/var/lib/clickhouse/data/system/*_log
.
New Features {#new-features}
Add support for a part of SQL/JSON standard.
#24148
(
l1tsolaiki
,
Kseniia Sumarokova
).
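A sketch of the SQL/JSON-style functions introduced here (argument order follows the current ClickHouse documentation; early 21.x builds may have differed):

```sql
SELECT
    JSON_VALUE('{"a": {"b": 1}}', '$.a.b') AS scalar,
    JSON_QUERY('{"a": [1, 2, 3]}', '$.a')  AS fragment;
```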
Collect common system metrics (in
system.asynchronous_metrics
and
system.asynchronous_metric_log
) on CPU usage, disk usage, memory usage, IO, network, files, load average, CPU frequencies, thermal sensors, EDAC counters, system uptime; also added metrics about the scheduling jitter and the time spent collecting the metrics. It works similar to
atop
in ClickHouse and allows access to monitoring data even if you have no additional tools installed. Close
#9430
.
#24416
(
alexey-milovidov
,
Yegor Levankov
).
Add MaterializedPostgreSQL table engine and database engine. This database engine allows replicating a whole database or any subset of database tables.
#20470
(
Kseniia Sumarokova
).
Add new functions
leftPad()
,
rightPad()
,
leftPadUTF8()
,
rightPadUTF8()
.
#26075
(
Vitaly Baranov
).
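For instance:

```sql
SELECT
    leftPad('7', 3, '0')   AS num,  -- pads on the left with '0' to length 3
    rightPad('ab', 5, '*') AS tag;  -- pads on the right with '*' to length 5
```

The `*UTF8` variants count code points rather than bytes, so they pad multi-byte strings correctly.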
Add the
FIRST
keyword to the
ADD INDEX
command to be able to add the index at the beginning of the indices list.
#25904
(
xjewer
).
Introduce
system.data_skipping_indices
table containing information about existing data skipping indices. Close
#7659
.
#25693
(
Dmitry Novik
).
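A sketch of querying the new system table:

```sql
SELECT table, name, type, expr, granularity
FROM system.data_skipping_indices
WHERE database = currentDatabase();
```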
Add
bin
/
unbin
functions.
#25609
(
zhaoyu
).
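A minimal sketch of the pair:

```sql
-- bin renders the binary representation; unbin decodes it back.
SELECT bin(14), unbin('00001110');
```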
Support
Map
and
UInt128
,
Int128
,
UInt256
,
Int256
types in
mapAdd
and
mapSubtract
functions.
#25596
(
Ildus Kurbangaliev
).
Support
DISTINCT ON (columns)
expression, close
#25404
.
#25589
(
Zijie Lu
).
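A sketch of the new expression:

```sql
-- Keeps one row per distinct value of k.
SELECT DISTINCT ON (k) k, v
FROM
(
    SELECT number % 3 AS k, number AS v
    FROM numbers(9)
)
ORDER BY k, v;
```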
Add an ability to reset a custom setting to default and remove it from the table's metadata. It allows rolling back the change without knowing the system/config's default. Closes
#14449
.
#17769
(
xjewer
).
Render pipelines as graphs in Web UI if
EXPLAIN PIPELINE graph = 1
query is submitted.
#26067
(
alexey-milovidov
).
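For example, submitting a query like the following in the Web UI now renders the pipeline as a graph:

```sql
-- Produces a graph description that the Web UI can render.
EXPLAIN PIPELINE graph = 1
SELECT count()
FROM numbers(1000000)
GROUP BY number % 10;
```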
Performance Improvements {#performance-improvements}
Compile aggregate functions. Use option
compile_aggregate_expressions
to enable it.
#24789
(
Maksim Kita
).
Improve latency of short queries that require reading from tables with many columns.
#26371
(
Anton Popov
).
Improvements {#improvements}
Use
Map
data type for system logs tables (
system.query_log
,
system.query_thread_log
,
system.processes
,
system.opentelemetry_span_log
). These tables will be auto-created with new data types. Virtual columns are created to support old queries. Closes
#18698
.
#23934
,
#25773
(
hexiaoting
,
sundy-li
,
Maksim Kita
).
For a dictionary with a complex key containing only one attribute, allow not wrapping the key expression in tuple for functions
dictGet
,
dictHas
.
#26130
(
Maksim Kita
).
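Assuming a dictionary `my_dict` with a complex key consisting of a single attribute, a sketch of the shortened form:

```sql
-- Before: the key expression had to be wrapped in tuple(...).
SELECT dictGet('my_dict', 'value', tuple('some_key'));

-- Now the tuple can be omitted for a one-attribute complex key:
SELECT dictGet('my_dict', 'value', 'some_key');
```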
Implement function
bin
/
hex
from
AggregateFunction
states.
#26094
(
zhaoyu
).
Support arguments of
UUID
type for
empty
and
notEmpty
functions.
UUID
is empty if it is all zeros (nil UUID). Closes
#3446
.
#25974
(
zhaoyu
).
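A sketch of the new behaviour:

```sql
SELECT
    empty(toUUID('00000000-0000-0000-0000-000000000000')) AS nil_uuid,
    notEmpty(generateUUIDv4())                            AS random_uuid;
```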
Add support for
SET SQL_SELECT_LIMIT
in MySQL protocol. Closes
#17115
.
#25972
(
Kseniia Sumarokova
).
More instrumentation for network interaction: add counters for recv/send bytes; add gauges for recvs/sends. Added missing documentation. Close
#5897
.
#25962
(
alexey-milovidov
).
Add setting
optimize_move_to_prewhere_if_final
. If query has
FINAL
, the optimization
move_to_prewhere
will be enabled only if both
optimize_move_to_prewhere
and
optimize_move_to_prewhere_if_final
are enabled. Closes
#8684
.
#25940
(
Kseniia Sumarokova
).
Allow complex quoted identifiers of JOINed tables. Close
#17861
.
#25924
(
alexey-milovidov
).
Add support for Unicode (e.g. Chinese, Cyrillic) components in
Nested
data types. Close
#25594
.
#25923
(
alexey-milovidov
).
Allow
quantiles*
functions to work with
aggregate_functions_null_for_empty
. Close
#25892
.
#25919
(
alexey-milovidov
).
Allow parameters for parametric aggregate functions to be arbitrary constant expressions (e.g.,
1 + 2
), not just literals. It also allows using the query parameters (in parameterized queries like
{param:UInt8}
) inside parametric aggregate functions. Closes
#11607
.
#25910
(
alexey-milovidov
).
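For example, parameters may now be constant expressions rather than bare literals:

```sql
-- Equivalent to quantiles(0.5, 0.75)(number).
SELECT quantiles(1 / 2, 3 / 4)(number) FROM numbers(100);

-- Query parameters also work inside parametric aggregates,
-- e.g. quantile({level:Float64})(number) in a parameterized query.
```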
Correctly throw the exception on the attempt to parse an invalid
Date
. Closes
#6481
.
#25909
(
alexey-milovidov
).
Support for multiple includes in configuration. It is possible to include users configuration, remote server configuration from multiple sources. Simply place
<include />
element with
from_zk
,
from_env
or
incl
attribute, and it will be replaced with the substitution.
#24404
(
nvartolomei
).
Support for queries with a column named
"null"
(it must be specified in back-ticks or double quotes) and
ON CLUSTER
. Closes
#24035
.
#25907
(
alexey-milovidov
).
Support
LowCardinality
,
Decimal
, and
UUID
for
JSONExtract
. Closes
#24606
.
#25900
(
Kseniia Sumarokova
).
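A sketch of extracting the newly supported types:

```sql
SELECT
    JSONExtract('{"id": "61f0c404-5cb3-11e7-907b-a6006ad3dba0"}', 'id', 'UUID') AS id,
    JSONExtract('{"price": 1.5}', 'price', 'Decimal(10, 2)')                    AS price,
    JSONExtract('{"tag": "a"}', 'tag', 'LowCardinality(String)')                AS tag;
```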
Convert history file from
readline
format to
replxx
format.
#25888
(
Azat Khuzhin
).
Fix an issue which can lead to intersecting parts after
DROP PART
or background deletion of an empty part.
#25884
(
alesapin
).
Better handling of lost parts for
ReplicatedMergeTree
tables. Fixes rare inconsistencies in
ReplicationQueue
. Fixes
#10368
.
#25820
(
alesapin
).
Allow starting clickhouse-client with unreadable working directory.
#25817
(
ianton-ru
).
Fix "No available columns" error for
Merge
storage.
#25801
(
Azat Khuzhin
).
MySQL Engine now supports the exchange of column comments between MySQL and ClickHouse.
#25795
(
Storozhuk Kostiantyn
).
Fix inconsistent behaviour of
GROUP BY
constant on empty set. Closes
#6842
.
#25786
(
Kseniia Sumarokova
).
Cancel already running merges in partition on
DROP PARTITION
and
TRUNCATE
for
ReplicatedMergeTree
. Resolves
#17151
.
#25684
(
tavplubix
).
Support ENUM data type for MaterializeMySQL.
#25676
(
Storozhuk Kostiantyn
).
Support materialized and aliased columns in JOIN, close
#13274
.
#25634
(
Vladimir C
).
Fix possible logical race condition between
ALTER TABLE ... DETACH
and background merges.
#25605
(
Azat Khuzhin
).
Make
NetworkReceiveElapsedMicroseconds
metric to correctly include the time spent waiting for data from the client to
INSERT
. Close
#9958
.
#25602
(
alexey-milovidov
).
Support
TRUNCATE TABLE
for S3 and HDFS. Close
#25530
.
#25550
(
Kseniia Sumarokova
).
Support for dynamic reloading of config to change number of threads in pool for background jobs execution (merges, mutations, fetches).
#25548
(
Nikita Mikhaylov
).
Allow extracting of non-string element as string using
JSONExtract
. This is for
#25414
.
#25452
(
Amos Bird
).
Support regular expression in
Database
argument for
StorageMerge
. Close
#776
.
#25064
(
flynn
).
Web UI: if the value looks like a URL, automatically generate a link.
#25965
(
alexey-milovidov
).
Make
sudo service clickhouse-server start
to work on systems with
systemd
like Centos 8. Close
#14298
. Close
#17799
.
#25921
(
alexey-milovidov
).
Bug Fixes {#bug-fixes-1}
Fix incorrect
SET ROLE
in some cases.
#26707
(
Vitaly Baranov
).
Fix potential
nullptr
dereference in window functions. Fix
#25276
.
#26668
(
Alexander Kuzmenkov
).
Fix incorrect function names of
groupBitmapAnd/Or/Xor
. Fix
#26557
(
Amos Bird
).
Fix crash in RabbitMQ shutdown in case RabbitMQ setup was not started. Closes
#26504
.
#26529
(
Kseniia Sumarokova
).
Fix issues with
CREATE DICTIONARY
query if dictionary name or database name was quoted. Closes
#26491
.
#26508
(
Maksim Kita
).
Fix broken name resolution after rewriting column aliases. Fix
#26432
.
#26475
(
Amos Bird
).
Fix infinite non-joined block stream in
partial_merge_join
close
#26325
.
#26374
(
Vladimir C
).
- Fix possible crash when logging in as a dropped user. Fix #26073. #26363 (Vitaly Baranov).
- Fix `optimize_distributed_group_by_sharding_key` for multiple columns (leads to incorrect result w/ `optimize_skip_unused_shards=1`/`allow_nondeterministic_optimize_skip_unused_shards=1` and multiple columns in sharding key expression). #26353 (Azat Khuzhin).
- `CAST` from `Date` to `DateTime` (or `DateTime64`) was not using the timezone of the `DateTime` type. It can also affect the comparison between `Date` and `DateTime`. Inference of the common type for `Date` and `DateTime` also was not using the corresponding timezone. It affected the results of function `if` and array construction. Closes #24128. #24129 (Maksim Kita).
- Fixed rare bug in lost replica recovery that may cause replicas to diverge. #26321 (tavplubix).
- Fix zstd decompression in case there are escape sequences at the end of internal buffer. Closes #26013. #26314 (Kseniia Sumarokova).
- Fix logical error on join with totals, close #26017. #26250 (Vladimir C).
- Remove excessive newline in `thread_name` column in `system.stack_trace` table. Fix #24124. #26210 (alexey-milovidov).
- Fix `joinGet` with `LowCardinality` columns, close #25993. #26118 (Vladimir C).
- Fix possible crash in `pointInPolygon` if the setting `validate_polygons` is turned off. #26113 (alexey-milovidov).
- Fix throwing exception when iterating over a non-existing remote directory. #26087 (ianton-ru).
- Fix rare server crash because of `abort` in ZooKeeper client. Fixes #25813. #26079 (alesapin).
- Fix wrong thread count estimation for right subquery join in some cases. Close #24075. #26052 (Vladimir C).
- Fixed incorrect `sequence_id` in MySQL protocol packets that ClickHouse sends on exception during query execution. It might cause the MySQL client to reset the connection to the ClickHouse server. Fixes #21184. #26051 (tavplubix).
- Fix possible mismatched header when using normal projection with `PREWHERE`. Fix #26020. #26038 (Amos Bird).
- Fix formatting of type `Map` with integer keys to `JSON`. #25982 (Anton Popov).
- Fix possible deadlock during query profiler stack unwinding. Fix #25968. #25970 (Maksim Kita).
- Fix crash on call `dictGet()` with bad arguments. #25913 (Vitaly Baranov).
- Fixed `scram-sha-256` authentication for PostgreSQL engines. Closes #24516. #25906 (Kseniia Sumarokova).
- Fix extremely long backoff for background tasks when the background pool is full. Fixes #25836. #25893 (alesapin).
- Fix ARM exception handling with non-default page size. Fixes #25512, #25044, #24901, #23183, #20221, #19703, #19028, #18391, #18121, #17994, #12483. #25854 (Maksim Kita).
- Fix sharding_key from column w/o function for `remote()` (before `select * from remote('127.1', system.one, dummy)` leads to `Unknown column: dummy, there are only columns .` error). #25824 (Azat Khuzhin).
- Fixed `Not found column ...` and `Missing column ...` errors when selecting from `MaterializeMySQL`. Fixes #23708, #24830, #25794. #25822 (tavplubix).
- Fix `optimize_skip_unused_shards_rewrite_in` for non-UInt64 types (may select incorrect shards eventually or throw `Cannot infer type of an empty tuple` or `Function tuple requires at least one argument`). #25798 (Azat Khuzhin).
- Fix rare bug with `DROP PART` query for `ReplicatedMergeTree` tables which can lead to error message `Unexpected merged part intersecting drop range`. #25783 (alesapin).
- Fix bug in `TTL` with `GROUP BY` expression which refuses to execute `TTL` after first execution in part. #25743 (alesapin).
- Allow StorageMerge to access tables with aliases. Closes #6051. #25694 (Kseniia Sumarokova).
- Fix slow dict join in some cases, close #24209. #25618 (Vladimir C).
- Fix `ALTER MODIFY COLUMN` of columns which participate in TTL expressions. #25554 (Anton Popov).
- Fix assertion in `PREWHERE` with non-UInt8 type, close #19589. #25484 (Vladimir C).
- Fix some fuzzed msan crash. Fixes #22517. #26428 (Nikolai Kochetov).
- Update `chown` cmd check in `clickhouse-server` docker entrypoint. It fixes error 'cluster pod restart failed (or timeout)' on Kubernetes. #26545 (Ky Li).

### ClickHouse release v21.7, 2021-07-09 {#clickhouse-release-v217-2021-07-09}

#### Backward Incompatible Change {#backward-incompatible-change-4}

- Improved performance of queries with explicitly defined large sets. Added compatibility setting `legacy_column_name_of_tuple_literal`. It makes sense to set it to `true` while doing a rolling update of a cluster from a version lower than 21.7 to any higher version. Otherwise distributed queries with explicitly defined sets at `IN` clause may fail during update. #25371 (Anton Popov).
- Forward/backward incompatible change of maximum buffer size in clickhouse-keeper (an experimental alternative to ZooKeeper). Better to do it now (before production), than later. #25421 (alesapin).

#### New Feature {#new-feature-4}

- Support configuration in YAML format as alternative to XML. This closes #3607. #21858 (BoloniniD).
- Provides a way to restore replicated table when the data is (possibly) present, but the ZooKeeper metadata is lost. Resolves #13458. #13652 (Mike Kot).
- Support structs and maps in Arrow/Parquet/ORC and dictionaries in Arrow input/output formats. Present new setting `output_format_arrow_low_cardinality_as_dictionary`. #24341 (Kruglov Pavel).
- Added support for `Array` type in dictionaries. #25119 (Maksim Kita).
- Added function `bitPositionsToArray`. Closes #23792. Author [Kevin Wan] (@MaxWk). #25394 (Maksim Kita).
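`bitPositionsToArray` returns the zero-based positions of the set bits of an integer argument as an array. A small sketch:

```sql
-- 5 is 0b101, so bits 0 and 2 are set.
SELECT bitPositionsToArray(toUInt8(5)) AS positions;  -- [0, 2]
```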
- Added function `dateName` to return names like 'Friday' or 'April'. Author [Daniil Kondratyev] (@dankondr). #25372 (Maksim Kita).
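For example, `dateName` takes the name of a date part and a date:

```sql
SELECT
    dateName('weekday', toDate('2021-07-09')) AS day_name,   -- 'Friday'
    dateName('month',   toDate('2021-04-01')) AS month_name; -- 'April'
```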
- Add `toJSONString` function to serialize columns to their JSON representations. #25164 (Amos Bird).
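A quick illustration of `toJSONString` on an array and a tuple (the aliases are illustrative):

```sql
SELECT
    toJSONString([1, 2, 3])        AS arr_json,   -- '[1,2,3]'
    toJSONString(tuple('a', 1.25)) AS tuple_json;
```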
- Now `query_log` has two new columns: `initial_query_start_time`, `initial_query_start_time_microsecond` that record the starting time of a distributed query if any. #25022 (Amos Bird).
- Add aggregate function `segmentLengthSum`. #24250 (flynn).
- Add a new boolean setting `prefer_global_in_and_join` which defaults all IN/JOIN as GLOBAL IN/JOIN. #23434 (Amos Bird).
- Support `ALTER DELETE` queries for `Join` table engine. #23260 (foolchi).
- Add `quantileBFloat16` aggregate function as well as the corresponding `quantilesBFloat16` and `medianBFloat16`. It is a very simple and fast quantile estimator with relative error not more than 0.390625%. This closes #16641. #23204 (Ivan Novitskiy).
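A minimal sketch of the new estimator; because values are rounded to bfloat16 precision, the result is approximate within the stated relative-error bound:

```sql
-- Approximate median of 0..99, with relative error no more than 0.390625%.
SELECT medianBFloat16(number) FROM numbers(100);
```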
- Implement `sequenceNextNode()` function useful for flow analysis. #19766 (achimbab).

#### Experimental Feature {#experimental-feature-4}

- Add support for virtual filesystem over HDFS. #11058 (overshov) (Kseniia Sumarokova).
- Now clickhouse-keeper (an experimental alternative to ZooKeeper) supports ZooKeeper-like `digest` ACLs. #24448 (alesapin).

#### Performance Improvement {#performance-improvement-4}

- Added optimization that transforms some functions to reading of subcolumns to reduce amount of read data. E.g., statement `col IS NULL` is transformed to reading of subcolumn `col.null`. Optimization can be enabled by setting `optimize_functions_to_subcolumns` which is currently off by default. #24406 (Anton Popov).
- Rewrite more columns to possible alias expressions. This may enable better optimization, such as projections. #24405 (Amos Bird).
- Index of type `bloom_filter` can be used for expressions with `hasAny` function with constant arrays. This closes: #24291. #24900 (Vasily Nemkov).
- Add exponential backoff to reschedule read attempt in case RabbitMQ queues are empty. (ClickHouse has support for importing data from RabbitMQ.) Closes #24340. #24415 (Kseniia Sumarokova).

#### Improvement {#improvement-4}
- Allow to limit bandwidth for replication. Add two Replicated*MergeTree settings: `max_replicated_fetches_network_bandwidth` and `max_replicated_sends_network_bandwidth` which allow limiting the maximum speed of replicated fetches/sends for a table. Add two server-wide settings (in `default` user profile): `max_replicated_fetches_network_bandwidth_for_server` and `max_replicated_sends_network_bandwidth_for_server` which limit maximum speed of replication for all tables. The settings are not followed perfectly accurately. Turned off by default. Fixes #1821. #24573 (alesapin).
- Resource constraints and isolation for ODBC and Library bridges. Use separate `clickhouse-bridge` group and user for bridge processes. Set oom_score_adj so the bridges will be first subjects for OOM killer. Set maximum RSS to 1 GiB. Closes #23861. #25280 (Kseniia Sumarokova).
- Add standalone `clickhouse-keeper` symlink to the main `clickhouse` binary. Now it's possible to run coordination without the main clickhouse server. #24059 (alesapin).
- Use global settings for query to `VIEW`. Fixed the behavior when queries to `VIEW` use local settings, which leads to errors if settings on `CREATE VIEW` and `SELECT` were different. As for now, `VIEW` won't use these modified settings, but you can still pass additional settings in `SETTINGS` section of `CREATE VIEW` query. Close #20551. #24095 (Vladimir).
- On server start, parts with incorrect partition ID would not be ever removed, but always detached. #25070. #25166 (Nikolai Kochetov).
- Increase size of background schedule pool to 128 (`background_schedule_pool_size` setting). It allows avoiding replication queue hang on slow zookeeper connection. #25072 (alesapin).
- Add merge tree setting `max_parts_to_merge_at_once` which limits the number of parts that can be merged in the background at once. Doesn't affect `OPTIMIZE FINAL` query. Fixes #1820. #24496 (alesapin).
- Allow `NOT IN` operator to be used in partition pruning. #24894 (Amos Bird).
- Recognize IPv4 addresses like `127.0.1.1` as local. This is controversial and closes #23504. Michael Filimonov will test this feature. #24316 (alexey-milovidov).
- A ClickHouse database created with MaterializeMySQL (it is an experimental feature) now contains all column comments from the MySQL database that was materialized. #25199 (Storozhuk Kostiantyn).
- Add settings (`connection_auto_close`/`connection_max_tries`/`connection_pool_size`) for MySQL storage engine. #24146 (Azat Khuzhin).
- Improve startup time of Distributed engine. #25663 (Azat Khuzhin).
- Improvement for Distributed tables. Drop replicas from dirname for internal_replication=true (allows INSERT into Distributed with cluster from any number of replicas; before, only 15 replicas were supported, and anything more would fail with ENAMETOOLONG while creating directory for async blocks). #25513 (Azat Khuzhin).
- Added support of `Interval` type for `LowCardinality`. It is needed for intermediate values of some expressions. Closes #21730. #25410 (Vladimir).
- Add `==` operator on time conditions for `sequenceMatch` and `sequenceCount` functions. For e.g.: `sequenceMatch('(?1)(?t==1)(?2)')(time, data = 1, data = 2)`. #25299 (Christophe Kalenzaga).
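A self-contained sketch of the new `(?t==N)` condition, using the `values` table function to build a tiny event stream (table structure and event names are illustrative):

```sql
-- Matches only when event 'B' happens exactly one second after event 'A'.
SELECT sequenceMatch('(?1)(?t==1)(?2)')(ts, event = 'A', event = 'B')
FROM values('ts DateTime, event String',
            ('2021-01-01 00:00:00', 'A'),
            ('2021-01-01 00:00:01', 'B'));
```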
- Add settings `http_max_fields`, `http_max_field_name_size`, `http_max_field_value_size`. #25296 (Ivan).
- Add support for function `if` with `Decimal` and `Int` types on its branches. This closes #20549. This closes #10142. #25283 (alexey-milovidov).
- Update prompt in `clickhouse-client` and display a message when reconnecting. This closes #10577. #25281 (alexey-milovidov).
- Correct memory tracking in aggregate function `topK`. This closes #25259. #25260 (alexey-milovidov).
- Fix `topLevelDomain` for IDN hosts (i.e. `example.рф`); before, it returned an empty string for such hosts. #25103 (Azat Khuzhin).
- Detect Linux kernel version at runtime (for working nested epoll, that is required for `async_socket_for_remote`/`use_hedged_requests`, otherwise remote queries may get stuck). #25067 (Azat Khuzhin).
- For distributed query, when `optimize_skip_unused_shards=1`, allow to skip shard with condition like `(sharding key) IN (one-element-tuple)`. (Tuples with many elements were supported. Tuple with single element did not work because it is parsed as literal.) #24930 (Amos Bird).
- Improved log messages of S3 errors, no more double whitespaces in case of empty keys and buckets. #24897 (Vladimir Chebotarev).
- Some queries require multi-pass semantic analysis. Try reusing built sets for `IN` in this case. #24874 (Amos Bird).
- Respect `max_distributed_connections` for `insert_distributed_sync` (otherwise for huge clusters and sync insert it may run out of `max_thread_pool_size`). #24754 (Azat Khuzhin).
- Avoid hiding errors like `Limit for rows or bytes to read exceeded` for scalar subqueries. #24545 (nvartolomei).
- Make String-to-Int parser stricter so that `toInt64('+')` will throw. #24475 (Amos Bird).
- If `SSD_CACHE` is created with DDL query, it can be created only inside `user_files` directory. #24466 (Maksim Kita).
- PostgreSQL support for specifying non-default schema for insert queries. Closes #24149. #24413 (Kseniia Sumarokova).
- Fix IPv6 addresses resolving (i.e. fixes `select * from remote('[::1]', system.one)`). #24319 (Azat Khuzhin).
- Fix trailing whitespaces in FROM clause with subqueries in multiline mode, and also change the output of the queries slightly in a more human-friendly way. #24151 (Azat Khuzhin).
- Improvement for Distributed tables. Add ability to split distributed batch on failures (i.e. due to memory limits, corruptions), under `distributed_directory_monitor_split_batch_on_failure` (OFF by default). #23864 (Azat Khuzhin).
- Handle column name clashes for `Join` table engine. Closes #20309. #23769 (Vladimir).
- Display progress for `File` table engine in `clickhouse-local` and on INSERT query in `clickhouse-client` when data is passed to stdin. Closes #18209. #23656 (Kseniia Sumarokova).
- Bugfixes and improvements of `clickhouse-copier`. Allow to copy tables with different (but compatible) schemas. Closes #9159. Added test to copy ReplacingMergeTree. Closes #22711. Support TTL on columns and Data Skipping Indices. It simply removes it to create internal Distributed table (underlying table will have TTL and skipping indices). Closes #19384. Allow to copy MATERIALIZED and ALIAS columns. There are some cases in which it could be helpful (e.g. if this column is in PRIMARY KEY). Now it could be allowed by setting `allow_to_copy_alias_and_materialized_columns` property to true in task configuration. Closes #9177. Closes [#11007](https://github.com/ClickHouse/ClickHouse/issues/11007). Closes #9514. Added a property `allow_to_drop_target_partitions` in task configuration to drop partition in original table before moving helping tables. Closes #20957. Get rid of `OPTIMIZE DEDUPLICATE` query. This hack was needed, because `ALTER TABLE MOVE PARTITION` was retried many times and plain MergeTree tables don't have deduplication. Closes #17966. Write progress to ZooKeeper node on path `task_path + /status` in JSON format. Closes #20955. Support for ReplicatedTables without arguments. Closes #24834. #23518 (Nikita Mikhaylov).
- Added sleep with backoff between read retries from S3. #23461 (Vladimir Chebotarev).
- Respect `insert_allow_materialized_columns` (allows materialized columns) for INSERT into `Distributed` table. #23349 (Azat Khuzhin).
- Add ability to push down LIMIT for distributed queries. #23027 (Azat Khuzhin).
- Fix zero-copy replication with several S3 volumes (Fixes #22679). #22864 (ianton-ru).
- Resolve the actual port number bound when a user requests any available port from the operating system to show it in the log message. #25569 (bnaecker).
- Fixed case when sometimes conversion of postgres arrays resulted in String data type, not n-dimensional array, because `attndims` works incorrectly in some cases. Closes #24804. #25538 (Kseniia Sumarokova).
- Fix conversion of DateTime with timezone for MySQL, PostgreSQL, ODBC. Closes #5057. #25528 (Kseniia Sumarokova).
- Distinguish KILL MUTATION for different tables (fixes unexpected `Cancelled mutating parts` error). #25025 (Azat Khuzhin).
- Allow to declare S3 disk at root of bucket (S3 virtual filesystem is an experimental feature under development). #24898 (Vladimir Chebotarev).
- Enable reading of subcolumns (e.g. components of Tuples) for distributed tables. #24472 (Anton Popov).
- A feature for MySQL compatibility protocol: make `user` function return correct output. Closes #25697. #25697 (sundyli).
#### Bug Fix {#bug-fix-3}

- Improvement for backward compatibility. Use old modulo function version when used in partition key. Closes #23508. #24157 (Kseniia Sumarokova).
- Fix extremely rare bug on low-memory servers which can lead to the inability to perform merges without restart. Possibly fixes #24603. #24872 (alesapin).
- Fix extremely rare error `Tagging already tagged part` in replication queue during concurrent `alter move/replace partition`. Possibly fixes #22142. #24961 (alesapin).
- Fix potential crash when calculating aggregate function states by aggregation of aggregate function states of other aggregate functions (not a practical use case). See #24523. #25015 (alexey-milovidov).
- Fixed the behavior when query `SYSTEM RESTART REPLICA` or `SYSTEM SYNC REPLICA` does not finish. This was detected on server with extremely low amount of RAM. #24457 (Nikita Mikhaylov).
- Fix bug which can lead to ZooKeeper client hung inside clickhouse-server. #24721 (alesapin).
- If ZooKeeper connection was lost and replica was cloned after restoring the connection, its replication queue might contain outdated entries. Fixed failed assertion when replication queue contains intersecting virtual parts. It may rarely happen if some data part was lost. Print error in log instead of terminating. #24777 (tavplubix).
- Fix lost `WHERE` condition in expression-push-down optimization of query plan (setting `query_plan_filter_push_down = 1` by default). Fixes #25368. #25370 (Nikolai Kochetov).
- Fix bug which can lead to intersecting parts after merges with TTL: `Part all_40_40_0 is covered by all_40_40_1 but should be merged into all_40_41_1. This shouldn't happen often.`. #25549 (alesapin).
- On ZooKeeper connection loss `ReplicatedMergeTree` table might wait for background operations to complete before trying to reconnect. It's fixed, now background operations are stopped forcefully. #25306 (tavplubix).
- Fix error `Key expression contains comparison between inconvertible types` for queries with `ARRAY JOIN` in case if array is used in primary key. Fixes #8247. #25546 (Anton Popov).
- Fix wrong totals for query `WITH TOTALS` and `WITH FILL`. Fixes #20872. #25539 (Anton Popov).
- Fix data race when querying `system.clusters` while reloading the cluster configuration at the same time. #25737 (Amos Bird).
- Fixed `No such file or directory` error on moving `Distributed` table between databases. Fixes #24971. #25667 (tavplubix).
- `REPLACE PARTITION` might be ignored in rare cases if the source partition was empty. It's fixed. Fixes #24869. #25665 (tavplubix).
- Fixed a bug in `Replicated` database engine that might rarely cause some replica to skip enqueued DDL query. #24805 (tavplubix).
- Fix null pointer dereference in `EXPLAIN AST` without query. #25631 (Nikolai Kochetov).
- Fix waiting of automatic dropping of empty parts. It could lead to full filling of background pool and stuck of replication. #23315 (Anton Popov).
- Fix restore of a table stored in S3 virtual filesystem (it is an experimental feature not ready for production). #25601 (ianton-ru).
- Fix nullptr dereference in `Arrow` format when using `Decimal256`. Add `Decimal256` support for `Arrow` format. #25531 (Kruglov Pavel).
- Fix excessive underscore before the names of the preprocessed configuration files. #25431 (Vitaly Baranov).
- A fix for `clickhouse-copier` tool: Fix segfault when sharding_key is absent in task config for copier. #25419 (Nikita Mikhaylov).
- Fix `REPLACE` column transformer when used in DDL by correctly quoting the formatted query. This fixes #23925. #25391 (Amos Bird).
- Fix the possibility of non-deterministic behaviour of the `quantileDeterministic` function and similar. This closes #20480. #25313 (alexey-milovidov).
- Support `SimpleAggregateFunction(LowCardinality)` for `SummingMergeTree`. Fixes #25134. #25300 (Nikolai Kochetov).
- Fix logical error with exception message "Cannot sum Array/Tuple in min/maxMap". #25298 (Kruglov Pavel).
- Fix error `Bad cast from type DB::ColumnLowCardinality to DB::ColumnVector<char8_t>` for queries where `LowCardinality` argument was used for IN (this bug appeared in 21.6). Fixes #25187. #25290 (Nikolai Kochetov).
- Fix incorrect behaviour of `joinGetOrNull` with not-nullable columns. This fixes #24261. #25288 (Amos Bird).
- Fix incorrect behaviour and UBSan report in big integers. In previous versions `CAST(1e19 AS UInt128)` returned zero. #25279 (alexey-milovidov).
- Fixed an error which occurred while inserting a subset of columns using CSVWithNames format. Fixes #25129. #25169 (Nikita Mikhaylov).
- Do not use table's projection for `SELECT` with `FINAL`. It is not supported yet. #25163 (Amos Bird).
- Fix possible parts loss after updating up to 21.5 in case table used `UUID` in partition key. (It is not recommended to use `UUID` in partition key.) Fixes #25070. #25127 (Nikolai Kochetov).
- Fix crash in query with cross join and `joined_subquery_requires_alias = 0`. Fixes #24011. #25082 (Nikolai Kochetov).
- Fix bug with constant maps in mapContains function that lead to error `empty column was returned by function mapContains`. Closes #25077. #25080 (Kruglov Pavel).
- Remove possibility to create tables with columns referencing themselves like `a UInt32 ALIAS a + 1` or `b UInt32 MATERIALIZED b`. Fixes #24910, #24292. #25059 (alesapin).
- Fix wrong result when using aggregate projection with *not empty* `GROUP BY` key to execute query with `GROUP BY` by *empty* key. #25055 (Amos Bird).
- Fix serialization of split nested messages in Protobuf format. This PR fixes #24647. #25000 (Vitaly Baranov).
- Fix limit/offset settings for distributed queries (ignore on the remote nodes). #24940 (Azat Khuzhin).
- Fix possible heap-buffer-overflow in `Arrow` format. #24922 (Kruglov Pavel).
- Fixed possible error 'Cannot read from istream at offset 0' when reading a file from DiskS3 (S3 virtual filesystem is an experimental feature under development that should not be used in production). #24885 (Pavel Kovalenko).
- Fix "Missing columns" exception when joining distributed materialized view. #24870 (Azat Khuzhin).
- Allow `NULL` values in postgresql compatibility protocol. Closes #22622. #24857 (Kseniia Sumarokova).
- Fix bug when exception `Mutation was killed` can be thrown to the client on mutation wait when mutation not loaded into memory yet. #24809 (alesapin).
- Fixed bug in deserialization of random generator state which might cause some data types such as `AggregateFunction(groupArraySample(N), T))` to behave in a non-deterministic way. #24538 (tavplubix).
- Disallow building uniqXXXXStates of other aggregation states. #24523 (Raúl Marín). Then allow it back by actually eliminating the root cause of the related issue. (alexey-milovidov).
- Fix usage of tuples in `CREATE .. AS SELECT` queries. #24464 (Anton Popov).
- Fix computation of total bytes in `Buffer` table. In current ClickHouse version total_writes.bytes counter decreases too much during the buffer flush. It leads to counter overflow and totalBytes return something around 17.44 EB some time after the flush. #24450 (DimasKovas).
- Fix incorrect information about the monotonicity of toWeek function. This fixes #24422. This bug was introduced in https://github.com/ClickHouse/ClickHouse/pull/5212, and was exposed later by smarter partition pruner. #24446 (Amos Bird).
- When user authentication is managed by LDAP: fixed potential deadlock that can happen during LDAP role (re)mapping, when LDAP group is mapped to a nonexistent local role. #24431 (Denis Glazachev).
- In "multipart/form-data" message consider the CRLF preceding a boundary as part of it. Fixes #23905. #24399 (Ivan).
- Fix drop partition with intersect fake parts. In rare cases there might be parts with mutation version greater than current block number. #24321 (Amos Bird).
- Fixed a bug in moving materialized view from Ordinary to Atomic database (`RENAME TABLE` query). Now inner table is moved to new database together with materialized view. Fixes #23926. #24309 (tavplubix).
- Allow empty HTTP headers. Fixes #23901. #24285 (Ivan).
- Correct processing of mutations (ALTER UPDATE/DELETE) in Memory tables. Closes #24274. #24275 (flynn).
- Make column LowCardinality property in JOIN output the same as in the input, close #23351, close #20315. #24061 (Vladimir).
- A fix for Kafka tables. Fix the bug in failover behavior when Engine = Kafka was not able to start consumption if the same consumer had an empty assignment previously. Closes #21118. #21267 (filimonov).

#### Build/Testing/Packaging Improvement {#buildtestingpackaging-improvement-4}

- Add `darwin-aarch64` (Mac M1 / Apple Silicon) builds in CI #25560 (Ivan) and put the links to the docs and website (alexey-milovidov).
- Adds cross-platform embedding of binary resources into executables. It works on Illumos. #25146 (bnaecker).
- Add join related options to stress tests to improve fuzzing. #25200 (Vladimir).
- Enable build with s3 module in osx #25217. #25218 (kevin wan).
- Add integration test cases to cover JDBC bridge. #25047 (Zhichun Wu).
- Integration tests configuration has special treatment for dictionaries. Removed remaining dictionaries manual setup. #24728 (Ilya Yatsishin).
- Add libfuzzer tests for YAMLParser class. #24480 (BoloniniD).
- Ubuntu 20.04 is now used to run integration tests, docker-compose version used to run integration tests is updated to 1.28.2. Environment variables now take effect on docker-compose. Rework test_dictionaries_all_layouts_separate_sources to allow parallel run. #20393 (Ilya Yatsishin).
- Fix TOCTOU error in installation script. #25277 (alexey-milovidov).

### ClickHouse release 21.6, 2021-06-05 {#clickhouse-release-216-2021-06-05}

#### Backward Incompatible Change {#backward-incompatible-change-5}

- uniqState / uniqHLL12State / uniqCombinedState / uniqCombined64State produce incompatible states with `UUID` type. #33607.

#### Upgrade Notes {#upgrade-notes-1}

- `zstd` compression library is updated to v1.5.0. You may get messages about "checksum does not match" in replication. These messages are expected due to update of compression algorithm and you can ignore them. These messages are informational and do not indicate any kinds of undesired behaviour.
- The setting `compile_expressions` is enabled by default. Although it has been heavily tested on variety of scenarios, if you find some undesired behaviour on your servers, you can try turning this setting off.
- Values of `UUID` type cannot be compared with integer. For example, instead of writing `uuid != 0` type `uuid != '00000000-0000-0000-0000-000000000000'`.

#### New Feature {#new-feature-5}

- Add Postgres-like cast operator (`::`). E.g.: `[1, 2]::Array(UInt8)`, `0.1::Decimal(4, 4)`, `number::UInt16`. #23871 (Anton Popov).
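The entry's own examples, combined into a single query (the aliases are illustrative):

```sql
SELECT
    [1, 2]::Array(UInt8) AS arr,
    0.1::Decimal(4, 4)   AS dec,
    number::UInt16       AS n
FROM numbers(3);
```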
* Make big integers production ready. Add support for the `UInt128` data type. Fix known issues with the `Decimal256` data type. Support big integers in dictionaries. Support `gcd`/`lcm` functions for big integers. Support big integers in array search and conditional functions. Support `LowCardinality(UUID)`. Support big integers in the `generateRandom` table function and `clickhouse-obfuscator`. Fix error with returning `UUID` from scalar subqueries. This fixes #7834. This fixes #23936. This fixes #4176. This fixes #24018. Backward incompatible change: values of `UUID` type cannot be compared with integers. For example, instead of writing `uuid != 0`, write `uuid != '00000000-0000-0000-0000-000000000000'`. #23631 (alexey-milovidov).
* Support the `Array` data type for inserting and selecting data in `Arrow`, `Parquet` and `ORC` formats. #21770 (taylor12805).
* Implement table comments. Closes #23225. #23548 (flynn).
* Support creating dictionaries with DDL queries in `clickhouse-local`. Closes #22354. Added support for `DETACH DICTIONARY PERMANENTLY`. Added support for `EXCHANGE DICTIONARIES` for the `Atomic` database engine. Added support for moving dictionaries between databases using `RENAME DICTIONARY`. #23436 (Maksim Kita).
* Add aggregate function `uniqTheta` to support Theta Sketch in ClickHouse. #23894. #22609 (Ping Yu).
* Add function `splitByRegexp`. #24077 (abel-cheng).
* Add function `arrayProduct`, which accepts an array as an argument and returns the product of all its elements. Closes #21613. #23782 (Maksim Kita).
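A quick sketch of the two functions added above (the input values are arbitrary):

```sql
SELECT splitByRegexp('[,;] ?', 'a, b;c');  -- ['a','b','c']
SELECT arrayProduct([1, 2, 3, 4]);         -- 24
```

Note that `arrayProduct` returns a `Float64` regardless of the input element type.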
* Add `thread_name` column to `system.stack_trace`. This closes #23256. #24124 (abel-cheng).
* If `insert_null_as_default` = 1, insert default values instead of `NULL` in `INSERT ... SELECT` and `INSERT ... SELECT ... UNION ALL ...` queries. Closes #22832. #23524 (Kseniia Sumarokova).
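A minimal sketch of the setting's effect, assuming a hypothetical table `t` with a column default:

```sql
CREATE TABLE t (x UInt32 DEFAULT 42) ENGINE = MergeTree ORDER BY x;

-- With the setting enabled, the NULL produced by the SELECT is replaced
-- by the column default (42) instead of failing on a non-nullable column.
INSERT INTO t SELECT NULL SETTINGS insert_null_as_default = 1;
```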
* Add support for progress indication in `clickhouse-local` with the `--progress` option. #23196 (Egor Savin).
* Add support for HTTP compression (determined by the `Content-Encoding` HTTP header) in the `http` dictionary source. This fixes #8912. #23946 (FArthur-cmd).
* Added `SYSTEM QUERY RELOAD MODEL` and `SYSTEM QUERY RELOAD MODELS`. Closes #18722. #23182 (Maksim Kita).
* Add setting `json` (boolean, 0 by default) for the `EXPLAIN PLAN` query. When enabled, the query output will be a single `JSON` row. It is recommended to use the `TSVRaw` format to avoid unnecessary escaping. #23082 (Nikolai Kochetov).
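The setting is given right after the `EXPLAIN` keyword; a sketch (the exact plan JSON depends on the server version):

```sql
EXPLAIN PLAN json = 1
SELECT number FROM system.numbers LIMIT 1
FORMAT TSVRaw;
```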
* Add setting `indexes` (boolean, disabled by default) to the `EXPLAIN PIPELINE` query. When enabled, it shows the used indexes and the number of filtered parts and granules for every index applied. Supported for `MergeTree*` tables. #22352 (Nikolai Kochetov).
* LDAP: implemented user DN detection functionality to use when mapping Active Directory groups to ClickHouse roles. #22228 (Denis Glazachev).
* New aggregate function `deltaSumTimestamp` for summing the difference between consecutive rows while maintaining ordering during merge by storing timestamps. #21888 (Russ Frank).
* Added a less secure IMDS credentials provider for S3 which works correctly under Docker. #21852 (Vladimir Chebotarev).
* Add back the `indexHint` function. This is for #21238. This reverts #9542. This fixes #9540. #21304 (Amos Bird).

#### Experimental Feature {#experimental-feature-5}

* Add `PROJECTION` support for `MergeTree*` tables. #20202 (Amos Bird).

#### Performance Improvement {#performance-improvement-5}

* Enable the `compile_expressions` setting by default. When this setting is enabled, compositions of simple functions and operators are compiled to native code with LLVM at runtime. #8482 (Maksim Kita, alexey-milovidov). Note: if you run into trouble, turn this option off.
* Update the `re2` library. Performance of regular expression matching is improved. Also this PR adds compatibility with gcc-11. #24196 (Raúl Marín).
* The ORC input format now reads stripe by stripe instead of reading the entire table into memory at once, which saves memory when files are huge. #23102 (Chao Ma).
* Fusion of the aggregate functions `sum`, `count` and `avg` in a query into a single aggregate function. The optimization is controlled with the `optimize_fuse_sum_count_avg` setting. It is implemented with a new aggregate function `sumCount`, which returns a tuple of two fields: `sum` and `count`. #21337 (hexiaoting).
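With the optimization on, the three aggregates below can be computed in a single pass; `sumCount` can also be called directly:

```sql
SELECT sum(number), count(number), avg(number)
FROM numbers(10)
SETTINGS optimize_fuse_sum_count_avg = 1;

SELECT sumCount(number) FROM numbers(10);  -- (45, 10)
```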
* Update `zstd` to v1.5.0. Compression performance is improved by a single-digit percentage. #24135 (Raúl Marín). Note: you may get "checksum does not match" messages in replication. These messages are expected due to the update of the compression algorithm and you can ignore them.
* Improved performance of `Buffer` tables: do not acquire a lock for `total_bytes`/`total_rows` in the `Buffer` engine. #24066 (Azat Khuzhin).
* Preallocation support for `hashed`/`sparse_hashed` dictionaries is returned. #23979 (Azat Khuzhin).
* Enable `async_socket_for_remote` by default (lowers the number of threads when querying `Distributed` tables with large fanout). #23683 (Nikolai Kochetov).

#### Improvement {#improvement-5}

* Add the `_partition_value` virtual column to the MergeTree table family. It can be used to prune partitions in a deterministic way. It is needed to implement a partition matcher for mutations. #23673 (Amos Bird).
* Added `region` parameter for S3 storage and disk. #23846 (Vladimir Chebotarev).
* Allow configuring different log levels for different logging channels. Closes #19569. #23857 (filimonov).
* Keep the default timezone in `DateTime` operations if it was not provided explicitly. For example, if you add one second to a value of `DateTime` type without a timezone, it will remain `DateTime` without a timezone. In previous versions the default timezone was placed into the returned data type explicitly, so it became `DateTime('something')`. This closes #4854. #23392 (alexey-milovidov).
* Allow the user to specify an empty string instead of a database name for the `MySQL` storage; the default database will be used for queries. In previous versions this worked only for SELECT queries; support for INSERT was also added. This closes #19281. This can be useful when working with `Sphinx` or other MySQL-compatible foreign databases. #23319 (alexey-milovidov).
* Fixed `quantile(s)TDigest`. Added special handling of singleton centroids according to tdunning/t-digest 3.2+. Also fixed a bug with over-compression of centroids in the implementation of an earlier version of the algorithm. #23314 (Vladimir Chebotarev).
* Function `now64` now supports an optional timezone argument. #24091 (Vasily Nemkov).
* Fix the case when a progress bar in interactive mode of clickhouse-client, appearing in the middle of the data, could overwrite some visible parts of the data in the terminal. This closes #19283. #23050 (alexey-milovidov).
* Fix crash when memory allocation fails in simdjson (https://github.com/simdjson/simdjson/pull/1567). Marked as an improvement because it is a very rare bug. #24147 (Amos Bird).
* Preserve dictionaries until storage shutdown (this avoids possible `external dictionary 'DICT' not found` errors at server shutdown during the final flush of the `Buffer` engine). #24068 (Azat Khuzhin).
* Flush `Buffer` tables before shutting down tables (within one database) to avoid discarding blocks because the underlying table had already been detached (the `Destination table default.a_data_01870 doesn't exist. Block of data is discarded` error in the log). #24067 (Azat Khuzhin).
* Now `prefer_column_name_to_alias = 1` will also favor column names for `GROUP BY`, `HAVING` and `ORDER BY`. This fixes #23882. #24022 (Amos Bird).
* Add support for `ORDER BY WITH FILL` with `DateTime64`. #24016 (kevin wan).
* Enable `DateTime64` to be a version column in `ReplacingMergeTree`. #23992 (kevin wan).
* Log information about OS name, kernel version and CPU architecture on server startup. #23988 (Azat Khuzhin).
* Support specifying a table schema for the `postgresql` dictionary source. Closes #23958. #23980 (Kseniia Sumarokova).
* Add hints for names of `Enum` elements (suggest names in case of typos). Closes #17112. #23919 (flynn).
* Measure the found rate (the percentage for which the value was found) for dictionaries (see `found_rate` in `system.dictionaries`). #23916 (Azat Khuzhin).
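The new column can be inspected alongside the other per-dictionary statistics:

```sql
SELECT name, found_rate FROM system.dictionaries;
```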
* Allow adding specific queue settings via the table setting `rabbitmq_queue_settings_list` (closes #23737 and #23918). Allow the user to control the whole RabbitMQ setup: if the table setting `rabbitmq_queue_consume` is set to `1`, the RabbitMQ table engine will only connect to the specified queue and will not perform any RabbitMQ consumer-side setup such as declaring exchanges, queues or bindings (closes #21757). Add proper cleanup when a RabbitMQ table is dropped: delete the queues which the table declared and all bound exchanges, if they were created by the table. #23887 (Kseniia Sumarokova).
* Add `broken_data_files`/`broken_data_compressed_bytes` to `system.distribution_queue`. Add a metric for the number of files for asynchronous insertion into Distributed tables that have been marked as broken (`BrokenDistributedFilesToInsert`). #23885 (Azat Khuzhin).
* Querying `system.tables` does not go to ZooKeeper anymore. #23793 (Fuwang Hu).
* Respect `lock_acquire_timeout_for_background_operations` for `OPTIMIZE` queries. #23623 (Azat Khuzhin).
* Possibility to change `S3` disk settings at runtime via the new `SYSTEM RESTART DISK` SQL command. #23429 (Pavel Kovalenko).
* If the user applied a misconfiguration by mistakenly setting `max_distributed_connections` to zero, every query to a `Distributed` table would throw an exception with a message containing "logical error". But it is really expected behaviour, not a logical error, so the exception message was slightly incorrect. It also triggered checks in our CI environment that ensure that no logical errors ever happen. Instead, `max_distributed_connections` misconfigured to zero is now treated as the minimum possible value (one). #23348 (Azat Khuzhin).
* Disable `min_bytes_to_use_mmap_io` by default. #23322 (Azat Khuzhin).
* Support `LowCardinality` nullability with `join_use_nulls`. Closes #15101. #23237 (vdimir).
* Added the possibility to restore `MergeTree` parts to the `detached` directory for the `S3` disk. #23112 (Pavel Kovalenko).
* Retries on HTTP connection drops in S3. #22988 (Vladimir Chebotarev).
* Add settings `external_storage_max_read_rows` and `external_storage_max_read_bytes` for the MySQL table engine, dictionary source and MaterializeMySQL minor data fetches. #22697 (TCeason).
* `MaterializeMySQL` (experimental feature): previously, MySQL 5.7.9 was not supported due to SQL incompatibility. Now MySQL parameter verification is left to MaterializeMySQL. #23413 (TCeason).
* Enable reading of subcolumns for distributed tables. #24472 (Anton Popov).
* Fix usage of tuples in `CREATE .. AS SELECT` queries. #24464 (Anton Popov).
* Support for the `Parquet` format in `Kafka` tables. #23412 (Chao Ma).
#### Bug Fix {#bug-fix-4}

* Use the old version of the modulo function when it is used in the partition key or primary key. Closes #23508. #24157 (Kseniia Sumarokova). It was a source of backward incompatibility in previous releases.
* Fixed the behavior when the query `SYSTEM RESTART REPLICA` or `SYSTEM SYNC REPLICA` was being processed infinitely. This was detected on a server with an extremely small amount of RAM. #24457 (Nikita Mikhaylov).
* Fix incorrect monotonicity of the `toWeek` function. This fixes #24422. This bug was introduced in #5212 and was exposed later by a smarter partition pruner. #24446 (Amos Bird).
* Fix drop partition with intersecting fake parts. In rare cases there might be parts with a mutation version greater than the current block number. #24321 (Amos Bird).
* Fixed a bug in moving a materialized view from an Ordinary to an Atomic database (`RENAME TABLE` query). Now the inner table is moved to the new database together with the materialized view. Fixes #23926. #24309 (tavplubix).
* Allow empty HTTP headers in client requests. Fixes #23901. #24285 (Ivan).
* Set `max_threads = 1` to fix mutation failures of `Memory` tables. Closes #24274. #24275 (flynn).
* Fix a typo in the implementation of `Memory` tables; this bug was introduced in #15127. Closes #24192. #24193 (张中南).
* Fix abnormal server termination due to `HDFS` becoming inaccessible during query execution. Closes #24117. #24191 (Kseniia Sumarokova).
* Fix crash on updating a `Nested` column with a const condition. #24183 (hexiaoting).
* Fix a race condition which could happen in RBAC under heavy load. This PR fixes #24090, #24134. #24176 (Vitaly Baranov).
* Fix a rare bug that could lead to a partially initialized table that could serve write requests (insert/alter and so on). Now such tables will be in readonly mode. #24122 (alesapin).
* Fix an issue: `EXPLAIN PIPELINE` with `SELECT xxx FINAL` showed a wrong pipeline. (hexiaoting).
* Fixed using a const `DateTime` value vs a `DateTime64` column in `WHERE`. #24100 (Vasily Nemkov).
* Fix crash in merge JOIN. Closes #24010. #24013 (vdimir).
* Some `ALTER PARTITION` queries might cause `Part A intersects previous part B` and `Unexpected merged part C intersecting drop range D` errors in the replication queue. It's fixed. Fixes #23296. #23997 (tavplubix).
* Fix SIGSEGV for external GROUP BY and overflow row (i.e. queries like `SELECT FROM GROUP BY WITH TOTALS SETTINGS max_bytes_before_external_group_by>0, max_rows_to_group_by>0, group_by_overflow_mode='any', totals_mode='before_having'`). #23962 (Azat Khuzhin).
* Fix keys metrics accounting for `CACHE` dictionaries with duplicates in the source (led to `DictCacheKeysRequestedMiss` overflows). #23929 (Azat Khuzhin).
* Fix the implementation of the connection pool of the `PostgreSQL` engine. Closes #23897. #23909 (Kseniia Sumarokova).
* Fix `distributed_group_by_no_merge = 2` with `GROUP BY` and an aggregate function wrapped into a regular function (had been broken in #23546). Throw an exception if someone tries to use `distributed_group_by_no_merge = 2` with window functions. Disable `optimize_distributed_group_by_sharding_key` for queries with window functions. #23906 (Azat Khuzhin).
* A fix for the `s3` table function: better handling of HTTP errors. Response bodies of HTTP errors were previously being ignored. #23844 (Vladimir Chebotarev).
* A fix for the `s3` table function: better handling of URIs. Fixed an incompatibility with URLs containing the `+` symbol; data with such keys could not be read previously. #23822 (Vladimir Chebotarev).
* Fix the error `Can't initialize pipeline with empty pipe` for queries with `GLOBAL IN/JOIN` and `use_hedged_requests`. Fixes #23431. #23805 (Nikolai Kochetov).
* Fix `CLEAR COLUMN` not working when the column is referenced by a materialized view. Closes #23764. #23781 (flynn).
* Fix heap use-after-free when reading from HDFS if the `Values` format is used. #23761 (Kseniia Sumarokova).
* Avoid a possible "Cannot schedule a task" error (in case some exception had occurred) on INSERT into Distributed. #23744 (Azat Khuzhin).
* Fixed a bug in the recovery of a stale `ReplicatedMergeTree` replica. Some metadata updates could be ignored by a stale replica if an `ALTER` query was executed during downtime of the replica. #23742 (tavplubix).
* Fix a bug with `Join` and `WITH TOTALS`. Closes #17718. #23549 (vdimir).
* Fix a possible `Block structure mismatch` error for queries with `UNION` which could happen after the filter-pushdown optimization. Fixes #23029. #23359 (Nikolai Kochetov).
* Add type conversion when the setting `optimize_skip_unused_shards_rewrite_in` is enabled. This fixes an MSan report. #23219 (Azat Khuzhin).
* Add a missing check when updating nested subcolumns. Closes #22353. #22503 (hexiaoting).

#### Build/Testing/Packaging Improvement {#buildtestingpackaging-improvement-5}

* Support building on Illumos. #24144. Adds support for building on Solaris-derived operating systems. #23746 (bnaecker).
* Add more benchmarks for hash tables, including the Swiss Table from Google (which appeared to be slower than the ClickHouse hash map in our specific usage scenario). #24111 (Maksim Kita).
* Update librdkafka from 1.6.0-RC3 to 1.6.1. #23874 (filimonov).
* Always enable `asynchronous-unwind-tables` explicitly. It may fix the query profiler on AArch64. #23602 (alexey-milovidov).
* Avoid a possible build dependency on locale and filesystem order. This allows reproducible builds. #23600 (alexey-milovidov).
* Remove a source of nondeterminism from the build. Builds at different points in time now produce byte-identical binaries. Partially addresses #22113. #23559 (alexey-milovidov).
* Add a simple tool for benchmarking (Zoo)Keeper. #23038 (alesapin).

### ClickHouse release 21.5, 2021-05-20 {#clickhouse-release-215-2021-05-20}

#### Backward Incompatible Change {#backward-incompatible-change-6}

* Change comparison of integers and floating-point numbers when the integer is not exactly representable in the floating-point data type. In the new version the comparison returns false, as a rounding error occurs. Example: `9223372036854775808.0 != 9223372036854775808`, because the number `9223372036854775808` is not representable exactly as a floating-point number (and `9223372036854775808.0` is rounded to `9223372036854776000.0`). In the previous version the comparison returned true, because if the floating-point number `9223372036854776000.0` is converted back to UInt64, it yields `9223372036854775808`. For reference, the Python programming language also treats these numbers as equal. But this behaviour depended on the CPU model (different results on AMD64 and AArch64 for some out-of-range numbers), so we made the comparison more precise: int and float numbers are treated as equal only if the int is represented exactly in the floating-point type. #22595 (alexey-milovidov).
* Remove support for `argMin` and `argMax` with a single `Tuple` argument. The code was not memory-safe. The feature was added by mistake and it is confusing for people. These functions can be reintroduced under different names later. This fixes #22384 and reverts #17359. #23393 (alexey-milovidov).

#### New Feature {#new-feature-6}

* Added functions `dictGetChildren(dictionary, key)` and `dictGetDescendants(dictionary, key, level)`. Function `dictGetChildren` returns all children as an array of indexes. It is the inverse transformation of `dictGetHierarchy`. Function `dictGetDescendants` returns all descendants as if `dictGetChildren` was applied `level` times recursively. A zero `level` value is equivalent to infinity. Improved performance of the `dictGetHierarchy` and `dictIsIn` functions. Closes #14656. #22096 (Maksim Kita).
* Added function `dictGetOrNull`. It works like `dictGet`, but returns `Null` if the key was not found in the dictionary. Closes #22375. #22413 (Maksim Kita).
* Added a table function `s3Cluster`, which allows processing files from `s3` in parallel on every node of a specified cluster. #22012 (Nikita Mikhaylov).
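Usage mirrors the `s3` table function with a leading cluster name; a hedged sketch (the cluster name, URL and schema below are placeholders):

```sql
SELECT count()
FROM s3Cluster(
    'my_cluster',
    'https://my-bucket.s3.amazonaws.com/data/*.csv',
    'CSV',
    'id UInt64, value String');
```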
* Added support for replicas and shards in the MySQL/PostgreSQL table engine / table function. You can write `SELECT * FROM mysql('host{1,2}-{1|2}', ...)`. Closes #20969. #22217 (Kseniia Sumarokova).
* Added `ALTER TABLE ... FETCH PART ...` query. It is similar to `FETCH PARTITION`, but fetches only one part. #22706 (turbo jason).
* Added a setting `max_distributed_depth` that limits the depth of recursive queries to `Distributed` tables. Closes #20229. #21942 (flynn).

#### Performance Improvement {#performance-improvement-6}

* Improved performance of `intDiv` by dynamic dispatch for AVX2. This closes #22314. #23000 (alexey-milovidov).
* Improved performance of reading from the `ArrowStream` input format for sources other than local files (e.g. URL). #22673 (nvartolomei).
* Disabled compression by default when interacting with localhost (with clickhouse-client or server-to-server with distributed queries) via the native protocol. It may improve performance of some import/export operations. This closes #22234. #22237 (alexey-milovidov).
* Exclude values that do not belong to the shard from the right part of the IN section for distributed queries (under `optimize_skip_unused_shards_rewrite_in`, enabled by default, since it still requires `optimize_skip_unused_shards`). #21511 (Azat Khuzhin).
* Improved performance of reading a subset of columns with File-like table engines and column-oriented formats like Parquet, Arrow or ORC. This closes #20129. #21302 (keenwolf).
* Allow moving more conditions to `PREWHERE`, as it was before version 21.1 (adjustment of internal heuristics). An insufficient number of moved conditions could lead to worse performance. #23397 (Anton Popov).
* Improved performance of ODBC connections and fixed all the outstanding issues from the backlog, using the `nanodbc` library instead of `Poco::ODBC`. Closes #9678. Add support for DateTime64 and Decimal* for the ODBC table engine. Closes #21961. Fixed an issue with cyrillic text being truncated. Closes #16246. Added connection pools for the ODBC bridge. #21972 (Kseniia Sumarokova).

#### Improvement {#improvement-6}

* Increase `max_uri_size` (the maximum size of a URL in the HTTP interface) to 1 MiB by default. This closes #21197. #22997 (alexey-milovidov).
* Set `background_fetches_pool_size` to `8`, which is better for production usage with frequent small insertions or a slow ZooKeeper cluster. #22945 (alexey-milovidov).
* `FlatDictionary`: added `initial_array_size` and `max_array_size` options. #22521 (Maksim Kita).
* Add a new setting `non_replicated_deduplication_window` for insert deduplication in non-replicated MergeTree tables. #22514 (alesapin).
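Being a MergeTree-level setting, it is applied per table; a sketch with a hypothetical table:

```sql
CREATE TABLE dedup_demo (x UInt32)
ENGINE = MergeTree ORDER BY x
SETTINGS non_replicated_deduplication_window = 100;

INSERT INTO dedup_demo VALUES (1);
INSERT INTO dedup_demo VALUES (1);  -- identical block, deduplicated
```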
* Update paths to the `CatBoost` model configs in config reloading. #22434 (Kruglov Pavel).
* Added `Decimal256` type support in dictionaries. `Decimal256` is an experimental feature. Closes #20979. #22960 (Maksim Kita).
* Enabled `async_socket_for_remote` by default (using a smaller number of OS threads for distributed queries). #23683 (Nikolai Kochetov).
* Fixed `quantile(s)TDigest`. Added special handling of singleton centroids according to tdunning/t-digest 3.2+. Also fixed a bug with over-compression of centroids in the implementation of an earlier version of the algorithm. #23314 (Vladimir Chebotarev).
* Make the function name `unhex` case-insensitive for compatibility with MySQL. #23229 (alexey-milovidov).
* Implement the functions `arrayHasAny`, `arrayHasAll`, `has`, `indexOf`, `countEqual` for the generic case when the types of array elements are different. In previous versions the functions `arrayHasAny` and `arrayHasAll` returned false, and `has`, `indexOf`, `countEqual` threw an exception. Also add support for `Decimal` and big integer types in the function `has` and similar ones. This closes #20272. #23044 (alexey-milovidov).
* Raised the threshold on the maximum number of matches in the result of the function `extractAllGroupsHorizontal`. #23036 (Vasily Nemkov).
* Do not perform `optimize_skip_unused_shards` for a cluster with one node. #22999 (Azat Khuzhin).
* Added the ability to run clickhouse-keeper (an experimental drop-in replacement for ZooKeeper) with SSL. The config setting `keeper_server.tcp_port_secure` can be used for secure interaction between client and keeper-server. `keeper_server.raft_configuration.secure` can be used to enable internal secure communication between nodes. #22992 (alesapin).
* Added the ability to flush buffers only in the background for `Buffer` tables. #22986 (Azat Khuzhin).
* When selecting from a MergeTree table with NULL in the WHERE condition, in rare cases an exception was thrown. This closes #20019. #22978 (alexey-milovidov).
* Fix error handling in the Poco HTTP client for AWS. #22973 (kreuzerkrieg).
* Respect `max_part_removal_threads` for `ReplicatedMergeTree`. #22971 (Azat Khuzhin).
* Fix an obscure corner case of the MergeTree settings `inactive_parts_to_throw_insert = 0` with `inactive_parts_to_delay_insert > 0`. #22947 (Azat Khuzhin).
* `dateDiff` now works with `DateTime64` arguments (even for values outside of the `DateTime` range). #22931 (Vasily Nemkov).
* MaterializeMySQL (experimental feature): added the ability to replicate MySQL databases containing views without failing. This is accomplished by ignoring the views. #22760 (Christian).
* Allow RBAC row policies via the PostgreSQL protocol. Closes #22658. The PostgreSQL protocol is enabled in the configuration by default. #22755 (Kseniia Sumarokova).
* Add a metric to track how much time is spent waiting for the Buffer layer lock. #22725 (Azat Khuzhin).
* Allow using CTEs in VIEW definitions. This closes #22491. #22657 (Amos Bird).
* Clear the rest of the screen and show the cursor in `clickhouse-client` if a previous program has left garbage in the terminal. This closes #16518. #22634 (alexey-milovidov).
* Make the `round` function behave consistently on non-x86_64 platforms. Rounding half to nearest even (banker's rounding) is used. #22582 (alexey-milovidov).
* Correctly check the structure of blocks of data that are sent by Distributed tables. #22325 (Azat Khuzhin).
* Allow publishing Kafka errors to a virtual column of the Kafka engine, controlled by the `kafka_handle_error_mode` setting. #21850 (fastio).
* Add aliases `simpleJSONExtract`/`simpleJSONHas` to `visitParam`/`visitParamExtract{UInt, Int, Bool, Float, Raw, String}`. Fixes #21383. #21519 (fastio).
* Add `clickhouse-library-bridge` for the library dictionary source. Closes #9502. #21509 (Kseniia Sumarokova).
* Forbid dropping a column if it is referenced by a materialized view. Closes #21164. #21303 (flynn).
* Support dynamic interserver credentials (rotating credentials without downtime). #14113 (johnskopis).
* Add support for Kafka storage with `Arrow` and `ArrowStream` format messages. #23415 (Chao Ma).
* Fixed a missing semicolon in an exception message. The user may find this exception message unpleasant to read. #23208 (alexey-milovidov).
* Fixed missing whitespace in some exception messages about the `LowCardinality` type. #23207 (alexey-milovidov).
* Some values were formatted with center alignment in table cells in the `Markdown` format. Not anymore. #23096 (alexey-milovidov).
* Remove non-essential details from suggestions in clickhouse-client. This closes #22158. #23040 (alexey-milovidov).
* Correct the calculation of the `bytes_allocated` field in `system.dictionaries` for `sparse_hashed` dictionaries. #22867 (Azat Khuzhin).
* Fixed approximate total rows accounting for reverse reading from MergeTree. #22726 (Azat Khuzhin).
* Fix the case when it was possible to configure a dictionary with a ClickHouse source that pointed to itself, leading to an infinite loop. Closes #14314. #22479 (Maksim Kita).

#### Bug Fix {#bug-fix-5}

* Multiple fixes for hedged requests. Fixed the error `Can't initialize pipeline with empty pipe` for queries with `GLOBAL IN/JOIN` when the setting `use_hedged_requests` is enabled. Fixes #23431. #23805 (Nikolai Kochetov). Fixed a race condition in hedged connections which led to a crash. This fixes #22161. #22443 (Kruglov Pavel). Fix a possible crash in case an `unknown packet` was received from a remote query (with `async_socket_for_remote` enabled). Fixes #21167. #23309 (Nikolai Kochetov).
* Fixed the behavior where disabling the `input_format_with_names_use_header` setting discarded all the input with the CSVWithNames format. This fixes #22406. #23202 (Nikita Mikhaylov).
* Fixed a remote JDBC bridge connection timeout issue. Closes #9609. #23771 (Maksim Kita, alexey-milovidov).
* Fix the logic of the initial load of `complex_key_hashed` if `update_field` is specified. Closes #23800. #23824 (Maksim Kita).
* Fixed a crash when `PREWHERE` and a row policy filter are both in effect with an empty result. #23763 (Amos Bird).
* Avoid a possible "Cannot schedule a task" error (in case some exception had occurred) on INSERT into Distributed. #23744 (Azat Khuzhin).
* Added an exception for the case of completely identical values in both samples in the aggregate function `mannWhitneyUTest`. This fixes #23646. #23654 (Nikita Mikhaylov).
* Fixed a server fault when inserting data through HTTP caused an exception. This fixes #23512. #23643 (Nikita Mikhaylov).
* Fixed misinterpretation of some `LIKE` expressions with escape sequences. #23610 (alexey-milovidov).
* Fixed restart/stop commands hanging. Closes #20214. #23552 (filimonov).
* Fixed the `COLUMNS` matcher in case of multiple JOINs in a select query. Closes #22736. #23501 (Maksim Kita).
* Fixed a crash when modifying a column's default value when the column itself is used as `ReplacingMergeTree`'s parameter. #23483 (hexiaoting).
* Fixed corner cases in vertical merges with `ReplacingMergeTree`. In rare cases they could lead to failed merges with exceptions like `Incomplete granules are not allowed while blocks are granules size`. #23459 (Anton Popov).
Fixed a bug that did not allow casting an empty array literal to an array with dimensions greater than 1, e.g.
CAST([] AS Array(Array(String)))
. Closes
#14476
.
#23456
(
Maksim Kita
).
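After the fix, a cast like the one from the report works as expected:

```sql
-- An empty array literal can now be cast to a nested array type:
SELECT CAST([] AS Array(Array(String))) AS empty_nested;
-- []
```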
Fixed a bug when
deltaSum
aggregate function produced incorrect result after resetting the counter.
#23437
(
Russ Frank
).
Fixed
Cannot unlink file
error on unsuccessful creation of ReplicatedMergeTree table with multidisk configuration. This closes
#21755
.
#23433
(
tavplubix
).
Fixed incompatible constant expression generation during partition pruning based on virtual columns. This fixes https://github.com/ClickHouse/ClickHouse/pull/21401#discussion_r611888913.
#23366
(
Amos Bird
).
Fixed a crash when setting join_algorithm is set to 'auto' and Join is performed with a Dictionary. Close
#23002
.
#23312
(
Vladimir
).
Don't relax NOT conditions during partition pruning. This fixes
#23305
and
#21539
.
#23310
(
Amos Bird
).
Fixed very rare race condition on background cleanup of old blocks. It might cause a block not to be deduplicated if it's too close to the end of deduplication window.
#23301
(
tavplubix
).
Fixed very rare (distributed) race condition between creation and removal of ReplicatedMergeTree tables. It might cause exceptions like
node doesn't exist
on attempt to create replicated table. Fixes
#21419
.
#23294
(
tavplubix
).
Fixed creation of a simple key dictionary from DDL if the primary key is not the first attribute. Fixes
#23236
.
#23262
(
Maksim Kita
).
Fixed reading from ODBC when there are many long column names in a table. Closes
#8853
.
#23215
(
Kseniia Sumarokova
).
MaterializeMySQL (experimental feature): fixed
Not found column
error when selecting from
MaterializeMySQL
with condition on key column. Fixes
#22432
.
#23200
(
tavplubix
).
Correct aliases handling if subquery was optimized to constant. Fixes
#22924
. Fixes
#10401
.
#23191
(
Maksim Kita
).
Server might fail to start if
data_type_default_nullable
setting is enabled in default profile, it's fixed. Fixes
#22573
.
#23185
(
tavplubix
).
Fixed a crash on shutdown which happened because of wrong accounting of current connections.
#23154
(
Vitaly Baranov
).
Fixed
Table .inner_id... doesn't exist
error when selecting from materialized view after detaching it from Atomic database and attaching back.
#23047
(
tavplubix
).
Fix error
Cannot find column in ActionsDAG result
which may happen if subquery uses
untuple
. Fixes
#22290
.
#22991
(
Nikolai Kochetov
).
Fix usage of constant columns of type
Map
with nullable values.
#22939
(
Anton Popov
).
Fixed
formatDateTime()
on
DateTime64
and the "%C" format specifier; fixed
toDateTime64()
for large values and non-zero scale.
#22937
(
Vasily Nemkov
).
Fixed a crash when using
mannWhitneyUTest
and
rankCorr
with window functions. This fixes
#22728
.
#22876
(
Nikita Mikhaylov
).
LIVE VIEW (experimental feature): fixed possible hanging in concurrent DROP/CREATE of TEMPORARY LIVE VIEW in
TemporaryLiveViewCleaner
.
#22858
(
Vitaly Baranov
).
Fixed pushdown of
HAVING
in case, when filter column is used in aggregation.
#22763
(
Anton Popov
).
Fixed possible hangs in Zookeeper requests in case of OOM exception. Fixes
#22438
.
#22684
(
Nikolai Kochetov
).
Fixed wait for mutations on several replicas for ReplicatedMergeTree table engines. Previously, mutation/alter query may finish before mutation actually executed on other replicas.
#22669
(
alesapin
).
Fixed exception for Log with nested types without columns in the SELECT clause.
#22654
(
Azat Khuzhin
).
Fix unlimited wait for auxiliary AWS requests.
#22594
(
Vladimir Chebotarev
).
Fixed a crash when client closes connection very early
#22579
.
#22591
(
nvartolomei
).
Map
data type (experimental feature): fixed an incorrect formatting of function
map
in distributed queries.
#22588
(
foolchi
).
Fixed deserialization of empty string without newline at end of TSV format. This closes
#20244
. Possible workaround without version update: set
input_format_null_as_default
to zero. It was zero in old versions.
#22527
(
alexey-milovidov
).
Fixed wrong cast of a column of
LowCardinality
type in Merge Join algorithm. Close
#22386
, close
#22388
.
#22510
(
Vladimir
).
Buffer overflow (on read) was possible in
tokenbf_v1
full text index. The excessive bytes are not used but the read operation may lead to crash in rare cases. This closes
#19233
.
#22421
(
alexey-milovidov
).
Do not limit HTTP chunk size. Fixes
#21907
.
#22322
(
Ivan
).
Fixed a bug which led to under-aggregation of data in case of enabled
optimize_aggregation_in_order
and many parts in table. Slightly improve performance of aggregation with enabled
optimize_aggregation_in_order
.
#21889
(
Anton Popov
).
Check if table function view is used as a column. This complements #20350.
#21465
(
Amos Bird
).
Fix "unknown column" error for tables with
Merge
engine in queries with
JOIN
and aggregation. Closes
#18368
, close
#22226
.
#21370
(
Vladimir
).
Fixed name clashes in pushdown optimization. It caused incorrect
WHERE
filtration after FULL JOIN. Close
#20497
.
#20622
(
Vladimir
).
Fixed very rare bug when quorum insert with
quorum_parallel=1
is not really "quorum" because of deduplication.
#18215
(
filimonov
- reported,
alesapin
- fixed).
Build/Testing/Packaging Improvement {#buildtestingpackaging-improvement-6}
Run stateless tests in parallel in CI.
#22300
(
alesapin
).
Simplify debian packages. This fixes
#21698
.
#22976
(
alexey-milovidov
).
Added support for ClickHouse build on Apple M1.
#21639
(
changvvb
).
Fixed ClickHouse Keeper build for macOS.
#22860
(
alesapin
).
Fixed some tests on AArch64 platform.
#22596
(
alexey-milovidov
).
Added function alignment for possibly better performance.
#21431
(
Danila Kutenin
).
Adjust some tests to output identical results on amd64 and aarch64 (qemu). The result was depending on implementation specific CPU behaviour.
#22590
(
alexey-milovidov
).
Allow query profiling only on x86_64. See
#15174
and
#15638
. This closes
#15638
.
#22580
(
alexey-milovidov
).
Allow building with unbundled xz (lzma) using
USE_INTERNAL_XZ_LIBRARY=OFF
CMake option.
#22571
(
Kfir Itzhak
).
Enable bundled
openldap
on
ppc64le
#22487
(
Kfir Itzhak
).
Disable incompatible libraries (platform specific typically) on
ppc64le
#22475
(
Kfir Itzhak
).
Add Jepsen test in CI for clickhouse Keeper.
#22373
(
alesapin
).
Build
jemalloc
with support for
heap profiling
.
#22834
(
nvartolomei
).
Avoid UB in
*Log
engines for rwlock unlock due to unlock from another thread.
#22583
(
Azat Khuzhin
).
Fixed UB by unlocking the rwlock of the TinyLog from the same thread.
#22560
(
Azat Khuzhin
).
ClickHouse release 21.4 {#clickhouse-release-214}
ClickHouse release 21.4.1 2021-04-12 {#clickhouse-release-2141-2021-04-12}
Backward Incompatible Change {#backward-incompatible-change-7}
The
toStartOfIntervalFunction
will align hour intervals to the midnight (in previous versions they were aligned to the start of unix epoch). For example,
toStartOfInterval(x, INTERVAL 11 HOUR)
will split every day into three intervals:
00:00:00..10:59:59
,
11:00:00..21:59:59
and
22:00:00..23:59:59
. This behaviour is more suited for practical needs. This closes
#9510
.
#22060
(
alexey-milovidov
).
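The new alignment can be illustrated with a quick sketch (the exact value depends on the server timezone):

```sql
-- 15:30 falls into the second of the day's three 11-hour intervals,
-- so the result is aligned to 11:00 of the same day rather than to
-- a boundary derived from the Unix epoch.
SELECT toStartOfInterval(toDateTime('2021-04-12 15:30:00'), INTERVAL 11 HOUR);
-- 2021-04-12 11:00:00
```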
Age
and
Precision
in graphite rollup configs should increase from retention to retention. Now it's checked and the wrong config raises an exception.
#21496
(
Mikhail f. Shiryaev
).
Fix
cutToFirstSignificantSubdomainCustom()
/
firstSignificantSubdomainCustom()
returning wrong result for 3+ level domains present in custom top-level domain list. For input domains matching these custom top-level domains, the third-level domain was considered to be the first significant one. This is now fixed. This change may introduce incompatibility if the function is used in e.g. the sharding key.
#21946
(
Azat Khuzhin
).
Column
keys
in table
system.dictionaries
was replaced with the columns
key.names
and
key.types
. Columns
key.names
,
key.types
,
attribute.names
,
attribute.types
from
system.dictionaries
table no longer require the dictionary to be loaded.
#21884
(
Maksim Kita
).
Now replicas that are processing the
ALTER TABLE ATTACH PART[ITION]
command search in their
detached/
folders before fetching the data from other replicas. As an implementation detail, a new command
ATTACH_PART
is introduced in the replicated log. Parts are searched and compared by their checksums.
#18978
(
Mike Kot
).
Note
:
ATTACH PART[ITION]
queries may not work during cluster upgrade.
It's not possible to rollback to older ClickHouse version after executing
ALTER ... ATTACH
query in new version as the old servers would fail to pass the
ATTACH_PART
entry in the replicated log.
In this version, empty
<remote_url_allow_hosts></remote_url_allow_hosts>
will block all access to remote hosts while in previous versions it did nothing. If you want to keep old behaviour and you have empty
remote_url_allow_hosts
element in configuration file, remove it.
#20058
(
Vladimir Chebotarev
).
New Feature {#new-feature-7}
Extended range of
DateTime64
to support dates from year 1925 to 2283. Improved support of
DateTime
around zero date (
1970-01-01
).
#9404
(
alexey-milovidov
,
Vasily Nemkov
). Not all date and time functions work for the extended range of dates.
Added support of Kerberos authentication for preconfigured users and HTTP requests (GSS-SPNEGO).
#14995
(
Denis Glazachev
).
Add
prefer_column_name_to_alias
setting to use original column names instead of aliases. It is needed for better compatibility with common databases' aliasing rules. This is for
#9715
and
#9887
.
#22044
(
Amos Bird
).
Added functions
dictGetChildren(dictionary, key)
,
dictGetDescendants(dictionary, key, level)
. Function
dictGetChildren
returns all children as an array of indexes. It is an inverse transformation for
dictGetHierarchy
. Function
dictGetDescendants
returns all descendants as if
dictGetChildren
was applied
level
times recursively. Zero
level
value is equivalent to infinity. Closes
#14656
.
#22096
(
Maksim Kita
).
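A sketch of the new functions, assuming a hypothetical hierarchical dictionary regions_dict keyed by a UInt64 id:

```sql
-- Direct children of key 1:
SELECT dictGetChildren('regions_dict', toUInt64(1));

-- Descendants up to two levels deep:
SELECT dictGetDescendants('regions_dict', toUInt64(1), 2);

-- Level 0 means no depth limit (all descendants):
SELECT dictGetDescendants('regions_dict', toUInt64(1), 0);
```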
Added
executable_pool
dictionary source. Close
#14528
.
#21321
(
Maksim Kita
).
Added table function
dictionary
. It works the same way as
Dictionary
engine. Closes
#21560
.
#21910
(
Maksim Kita
).
Support
Nullable
type for
PolygonDictionary
attribute.
#21890
(
Maksim Kita
).
Functions
dictGet
,
dictHas
use current database name if it is not specified for dictionaries created with DDL. Closes
#21632
.
#21859
(
Maksim Kita
).
Added function
dictGetOrNull
. It works like
dictGet
, but return
Null
in case key was not found in dictionary. Closes
#22375
.
#22413
(
Maksim Kita
).
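A minimal sketch, assuming a hypothetical dictionary products_dict with a String attribute name:

```sql
-- dictGetOrNull returns NULL for a missing key instead of
-- throwing or substituting a default value:
SELECT dictGetOrNull('products_dict', 'name', toUInt64(42)) AS name;
```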
Added async update in
ComplexKeyCache
,
SSDCache
,
SSDComplexKeyCache
dictionaries. Added support for
Nullable
type in
Cache
,
ComplexKeyCache
,
SSDCache
,
SSDComplexKeyCache
dictionaries. Added support for multiple attributes fetch with
dictGet
,
dictGetOrDefault
functions. Fixes
#21517
.
#20595
(
Maksim Kita
).
Support
dictHas
function for
RangeHashedDictionary
. Fixes
#6680
.
#19816
(
Maksim Kita
).
Add function
timezoneOf
that returns the timezone name of
DateTime
or
DateTime64
data types. This does not close
#9959
. Fix inconsistencies in function names: add aliases
timezone
and
timeZone
as well as
toTimezone
and
toTimeZone
and
timezoneOf
and
timeZoneOf
.
#22001
(
alexey-milovidov
).
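For example (the first result depends on the server's timezone configuration):

```sql
SELECT timezoneOf(now());            -- e.g. 'Europe/Amsterdam'
SELECT timeZoneOf(now());            -- case-insensitive alias of the same function
SELECT timezoneOf(toTimeZone(now(), 'UTC'));  -- 'UTC'
```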
Add new optional clause
GRANTEES
for
CREATE/ALTER USER
commands. It specifies users or roles which are allowed to receive grants from this user on condition this user has also all required access granted with grant option. By default
GRANTEES ANY
is used which means a user with grant option can grant to anyone. Syntax:
CREATE USER ... GRANTEES {user | role | ANY | NONE} [,...] [EXCEPT {user | role} [,...]]
.
#21641
(
Vitaly Baranov
).
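A sketch of the new clause (the user and role names are hypothetical):

```sql
-- jack may pass his grants on only to robin or to members of accountants:
CREATE USER jack GRANTEES robin, accountants;

-- Forbid re-granting entirely:
ALTER USER jack GRANTEES NONE;

-- Allow granting to anyone except robin (the default is GRANTEES ANY):
ALTER USER jack GRANTEES ANY EXCEPT robin;
```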
Add new column
slowdowns_count
to
system.clusters
. When using hedged requests, it shows how many times we switched to another replica because this replica was responding slowly. Also show actual value of
errors_count
in
system.clusters
.
#21480
(
Kruglov Pavel
).
Add
_partition_id
virtual column for
MergeTree*
engines. Allow to prune partitions by
_partition_id
. Add
partitionID()
function to calculate partition id string.
#21401
(
Amos Bird
).
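A sketch of partition pruning with the new virtual column (the table and partition key are hypothetical):

```sql
-- For a table partitioned by toYYYYMM(date), read only one partition:
SELECT count()
FROM hits
WHERE _partition_id = partitionID(toYYYYMM(toDate('2021-04-01')));
```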
Add function
isIPAddressInRange
to test if an IPv4 or IPv6 address is contained in a given CIDR network prefix.
#21329
(
PHO
).
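For example:

```sql
SELECT isIPAddressInRange('127.0.0.1', '127.0.0.0/8');     -- 1
SELECT isIPAddressInRange('10.0.0.1', '192.168.0.0/16');   -- 0
SELECT isIPAddressInRange('2001:db8::1', '2001:db8::/32'); -- 1
```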
Added new SQL command
ALTER TABLE 'table_name' UNFREEZE [PARTITION 'part_expr'] WITH NAME 'backup_name'
. This command is needed to properly remove 'freezed' partitions from all disks.
#21142
(
Pavel Kovalenko
).
Supports implicit key type conversion for JOIN.
#19885
(
Vladimir
).
Experimental Feature {#experimental-feature-6}
Support
RANGE OFFSET
frame (for window functions) for floating point types. Implement
lagInFrame
/
leadInFrame
window functions, which are analogous to
lag
/
lead
, but respect the window frame. They are identical when the frame is
between unbounded preceding and unbounded following
. This closes
#5485
.
#21895
(
Alexander Kuzmenkov
).
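A sketch of the frame-respecting variants (the table and columns are hypothetical):

```sql
-- Unlike plain lag/lead, lagInFrame/leadInFrame only look at rows
-- inside the window frame:
SELECT
    value,
    lagInFrame(value)  OVER w AS prev_in_frame,
    leadInFrame(value) OVER w AS next_in_frame
FROM events
WINDOW w AS (ORDER BY ts ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING);
```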
Zero-copy replication for
ReplicatedMergeTree
over S3 storage.
#16240
(
ianton-ru
).
Added possibility to migrate existing S3 disk to the schema with backup-restore capabilities.
#22070
(
Pavel Kovalenko
).
Performance Improvement {#performance-improvement-7}
Supported parallel formatting in
clickhouse-local
and everywhere else.
#21630
(
Nikita Mikhaylov
).
Support parallel parsing for
CSVWithNames
and
TSVWithNames
formats. This closes
#21085
.
#21149
(
Nikita Mikhaylov
).
Enable read with mmap IO for file ranges from 64 MiB (the settings
min_bytes_to_use_mmap_io
). It may lead to moderate performance improvement.
#22326
(
alexey-milovidov
).
Add cache for files read with
min_bytes_to_use_mmap_io
setting. It makes significant (2x and more) performance improvement when the value of the setting is small by avoiding frequent mmap/munmap calls and the consequent page faults. Note that mmap IO has major drawbacks that makes it less reliable in production (e.g. hung or SIGBUS on faulty disks; less controllable memory usage). Nevertheless it is good in benchmarks.
#22206
(
alexey-milovidov
).
Avoid unnecessary data copy when using codec
NONE
. Please note that codec
NONE
is mostly useless - it's recommended to always use compression (
LZ4
is by default). Despite the common belief, disabling compression may not improve performance (the opposite effect is possible). The
NONE
codec is useful in some cases: - when data is uncompressable; - for synthetic benchmarks.
#22145
(
alexey-milovidov
).
Faster
GROUP BY
with small
max_rows_to_group_by
and
group_by_overflow_mode='any'
.
#21856
(
Nikolai Kochetov
).
Optimize performance of queries like
SELECT ... FINAL ... WHERE
. Now in queries with
FINAL
it's allowed to move to
PREWHERE
columns, which are in sorting key.
#21830
(
foolchi
).
Improved performance by replacing
memcpy
to another implementation. This closes
#18583
.
#21520
(
alexey-milovidov
).
Improve performance of aggregation in order of sorting key (with enabled setting
optimize_aggregation_in_order
).
#19401
(
Anton Popov
).
Improvement {#improvement-7}
Add connection pool for PostgreSQL table/database engine and dictionary source. Should fix
#21444
.
#21839
(
Kseniia Sumarokova
).
Support non-default table schema for postgres storage/table-function. Closes
#21701
.
#21711
(
Kseniia Sumarokova
).
Support replicas priority for postgres dictionary source.
#21710
(
Kseniia Sumarokova
).
Introduce a new merge tree setting
min_bytes_to_rebalance_partition_over_jbod
which allows assigning new parts to different disks of a JBOD volume in a balanced way.
#16481
(
Amos Bird
).
Added
Grant
,
Revoke
and
System
values of
query_kind
column for corresponding queries in
system.query_log
.
#21102
(
Vasily Nemkov
).
Allow customizing timeouts for HTTP connections used for replication independently from other HTTP timeouts.
#20088
(
nvartolomei
).
Better exception message in client in case of exception while server is writing blocks. In previous versions client may get misleading message like
Data compressed with different methods
.
#22427
(
alexey-milovidov
).
Fix error
Directory tmp_fetch_XXX already exists
which could happen after failed fetch part. Delete temporary fetch directory if it already exists. Fixes
#14197
.
#22411
(
nvartolomei
).
Fix MSan report for function
range
with
UInt256
argument (support for large integers is experimental). This closes
#22157
.
#22387
(
alexey-milovidov
).
Add
current_database
column to
system.processes
table. It contains the current database of the query.
#22365
(
Alexander Kuzmenkov
).
Add case-insensitive history search/navigation and subword movement features to
clickhouse-client
.
#22105
(
Amos Bird
).
If tuple of NULLs, e.g.
(NULL, NULL)
is on the left hand side of
IN
operator with tuples of non-NULLs on the right hand side, e.g.
SELECT (NULL, NULL) IN ((0, 0), (3, 1))
now returns 0 instead of throwing an exception about incompatible types. The expression may also appear due to optimization of something like
SELECT (NULL, NULL) = (8, 0) OR (NULL, NULL) = (3, 2) OR (NULL, NULL) = (0, 0) OR (NULL, NULL) = (3, 1)
. This closes
#22017
.
#22063
(
alexey-milovidov
).
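For example:

```sql
-- Previously this threw an exception about incompatible types:
SELECT (NULL, NULL) IN ((0, 0), (3, 1));  -- now returns 0
```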
Update used version of simdjson to 0.9.1. This fixes
#21984
.
#22057
(
Vitaly Baranov
).
Added case insensitive aliases for
CONNECTION_ID()
and
VERSION()
functions. This fixes
#22028
.
#22042
(
Eugene Klimov
).
Add option
strict_increase
to
windowFunnel
function to calculate each event once (resolve
#21835
).
#22025
(
Vladimir
).
If partition key of a
MergeTree
table does not include
Date
or
DateTime
columns but includes exactly one
DateTime64
column, expose its values in the
min_time
and
max_time
columns in
system.parts
and
system.parts_columns
tables. Add
min_time
and
max_time
columns to
system.parts_columns
table (this was previously inconsistent with the
system.parts
table). This closes
#18244
.
#22011
(
alexey-milovidov
).
Supported
replication_alter_partitions_sync=1
setting in
clickhouse-copier
for moving partitions from helping table to destination. Decreased default timeouts. Fixes
#21911
.
#21912
(
turbo jason
).
Show path to data directory of
EmbeddedRocksDB
tables in system tables.
#21903
(
tavplubix
).
Add profile event
HedgedRequestsChangeReplica
, change read data timeout from sec to ms.
#21886
(
Kruglov Pavel
).
DiskS3 (experimental feature under development): fixed a bug that made it impossible to move a directory when the destination is not empty and a cache disk is used.
#21837
(
Pavel Kovalenko
).
Better formatting for
Array
and
Map
data types in Web UI.
#21798
(
alexey-milovidov
).
Update clusters only if their configurations were updated.
#21685
(
Kruglov Pavel
).
Propagate query and session settings for distributed DDL queries. Set
distributed_ddl_entry_format_version
to 2 to enable this. Added
distributed_ddl_output_mode
setting. Supported modes:
none
,
throw
(default),
null_status_on_timeout
and
never_throw
. Miscellaneous fixes and improvements for
Replicated
database engine.
#21535
(
tavplubix
).
If
PODArray
was instantiated with an element size that is neither a fraction nor a multiple of 16, buffer overflow was possible. No bugs in current releases exist.
#21533
(
alexey-milovidov
).
Add
last_error_time
/
last_error_message
/
last_error_stacktrace
/
remote
columns for
system.errors
.
#21529
(
Azat Khuzhin
).
Add aliases
simpleJSONExtract/simpleJSONHas
to
visitParam/visitParamExtract{UInt, Int, Bool, Float, Raw, String}
. Fixes #21383.
#21519
(
fastio
).
Add setting
optimize_skip_unused_shards_limit
to limit the number of sharding key values for
optimize_skip_unused_shards
.
#21512
(
Azat Khuzhin
).
Improve
clickhouse-format
to not throw an exception when there are extra spaces or a comment after the last query, and to throw an exception early with a readable message when formatting
ASTInsertQuery
with data.
#21311
(
flynn
).
Improve support of integer keys in data type
Map
.
#21157
(
Anton Popov
).
MaterializeMySQL: attempt to reconnect to MySQL if the connection is lost.
#20961
(
HΓ₯vard KvΓ₯len
).
Support more cases to rewrite
CROSS JOIN
to
INNER JOIN
.
#20392
(
Vladimir
).
Do not create empty parts on INSERT when
optimize_on_insert
setting enabled. Fixes
#20304
.
#20387
(
Kruglov Pavel
).
MaterializeMySQL
: add minmax skipping index for
_version
column.
#20382
(
Stig Bakken
).
Add option
--backslash
for
clickhouse-format
, which can add a backslash at the end of each line of the formatted query.
#21494
(
flynn
).
Now ClickHouse will not throw a
LOGICAL_ERROR
exception on an attempt to mutate an already covered part. Fixes
#22013
.
#22291
(
alesapin
).
Bug Fix {#bug-fix-6}
Remove socket from epoll before cancelling packet receiver in
HedgedConnections
to prevent possible race. Fixes
#22161
.
#22443
(
Kruglov Pavel
).
Add (missing) memory accounting in parallel parsing routines. In previous versions OOM was possible when the result set contains very large blocks of data. This closes
#22008
.
#22425
(
alexey-milovidov
).
Fix exception which may happen when
SELECT
has constant
WHERE
condition and source table has columns which names are digits.
#22270
(
LiuNeng
).
Fix query cancellation with
use_hedged_requests=0
and
async_socket_for_remote=1
.
#22183
(
Azat Khuzhin
).
Fix uncaught exception in
InterserverIOHTTPHandler
.
#22146
(
Azat Khuzhin
).
Fix docker entrypoint in case
http_port
is not in the config.
#22132
(
Ewout
).
Fix error
Invalid number of rows in Chunk
in
JOIN
with
TOTALS
and
arrayJoin
. Closes
#19303
.
#22129
(
Vladimir
).
Fix the background thread pool name which used to poll message from Kafka. The Kafka engine with the broken thread pool will not consume the message from message queue.
#22122
(
fastio
).
Fix waiting for
OPTIMIZE
and
ALTER
queries for
ReplicatedMergeTree
table engines. Now the query will not hang when the table was detached or restarted.
#22118
(
alesapin
).
Disable
async_socket_for_remote
/
use_hedged_requests
for buggy Linux kernels.
#22109
(
Azat Khuzhin
).
Docker entrypoint: avoid chown of
.
in case when
LOG_PATH
is empty. Closes
#22100
.
#22102
(
filimonov
).
The function
decrypt
was lacking a check for the minimal size of data encrypted in
AEAD
mode. This closes
#21897
.
#22064
(
alexey-milovidov
).
In rare case, merge for
CollapsingMergeTree
may create granule with
index_granularity + 1
rows. Because of this, internal check, added in
#18928
(affects 21.2 and 21.3), may fail with error
Incomplete granules are not allowed while blocks are granules size
. This error did not allow parts to merge.
#21976
(
Nikolai Kochetov
).
Reverted
#15454
that may cause significant increase in memory usage while loading external dictionaries of hashed type. This closes
#21935
.
#21948
(
Maksim Kita
).
Prevent hedged connections overlaps (
Unknown packet 9 from server
error).
#21941
(
Azat Khuzhin
).
Fix reading the HTTP POST request with "multipart/form-data" content type in some cases.
#21936
(
Ivan
).
Fix wrong
ORDER BY
results when a query contains window functions, and optimization for reading in primary key order is applied. Fixes
#21828
.
#21915
(
Alexander Kuzmenkov
).
Fix deadlock in first catboost model execution. Closes
#13832
.
#21844
(
Kruglov Pavel
).
Fix incorrect query result (and possible crash) which could happen when
WHERE
or
HAVING
condition is pushed before
GROUP BY
. Fixes
#21773
.
#21841
(
Nikolai Kochetov
).
Better error handling and logging in
WriteBufferFromS3
.
#21836
(
Pavel Kovalenko
).
Fix possible crashes in aggregate functions with combinator
Distinct
, while using two-level aggregation. This is a follow-up fix of
#18365
. It can only be reproduced in production environments.
#21818
(
Amos Bird
).
Fix scalar subquery index analysis. This fixes
#21717
, which was introduced in
#18896
.
#21766
(
Amos Bird
).
Fix bug for
ReplicatedMerge
table engines when
ALTER MODIFY COLUMN
query doesn't change the type of
Decimal
column if its size (32 bit or 64 bit) doesn't change.
#21728
(
alesapin
).
Fix possible infinite waiting when concurrent
OPTIMIZE
and
DROP
are run for
ReplicatedMergeTree
.
#21716
(
Azat Khuzhin
).
Fix function
arrayElement
with type
Map
for constant integer arguments.
#21699
(
Anton Popov
).
Fix SIGSEGV on not existing attributes from
ip_trie
with
access_to_key_from_attributes
.
#21692
(
Azat Khuzhin
).
Server now starts accepting connections only after
DDLWorker
and dictionaries initialization.
#21676
(
Azat Khuzhin
).
Add type conversion for keys of tables of type
Join
(previously led to SIGSEGV).
#21646
(
Azat Khuzhin
).
Fix distributed requests cancellation (for example simple select from multiple shards with limit, i.e.
select * from remote('127.{2,3}', system.numbers) limit 100
) with
async_socket_for_remote=1
.
#21643
(
Azat Khuzhin
).
Fix
fsync_part_directory
for horizontal merge.
#21642
(
Azat Khuzhin
).
Remove unknown columns from joined table in
WHERE
for queries to external database engines (MySQL, PostgreSQL). close
#14614
, close
#19288
(dup), close
#19645
(dup).
#21640
(
Vladimir
).
std::terminate
was called if there was an error writing data to S3.
#21624
(
Vladimir
).
Fix possible error
Cannot find column
when
optimize_skip_unused_shards
is enabled and zero shards are used.
#21579
(
Azat Khuzhin
).
In case if query has constant
WHERE
condition, and setting
optimize_skip_unused_shards
enabled, all shards may be skipped and query could return incorrect empty result.
#21550
(
Amos Bird
).
Fix table function
clusterAllReplicas
returns wrong
_shard_num
. close
#21481
.
#21498
(
flynn
).
Fix that S3 table holds old credentials after config update.
#21457
(
Grigory Pervakov
).
Fixed race on SSL object inside
SecureSocket
in Poco.
#21456
(
Nikita Mikhaylov
).
Fix
Avro
format parsing for
Kafka
. Fixes
#21437
.
#21438
(
Ilya Golshtein
).
Fix receive and send timeouts and non-blocking read in secure socket.
#21429
(
Kruglov Pavel
).
force_drop_table
flag didn't work for
MATERIALIZED VIEW
, it's fixed. Fixes
#18943
.
#20626
(
tavplubix
).
Fix name clashes in
PredicateRewriteVisitor
. It caused incorrect
WHERE
filtration after full join. Close
#20497
.
#20622
(
Vladimir
).
Build/Testing/Packaging Improvement {#buildtestingpackaging-improvement-7}
* Add `Jepsen` tests for ClickHouse Keeper. #21677 (alesapin).
* Run stateless tests in parallel in CI. Depends on #22181. #22300 (alesapin).
* Enable status check for `SQLancer` CI run. #22015 (Ilya Yatsishin).
* Multiple preparations for PowerPC builds: Enable the bundled openldap on `ppc64le`. #22487 (Kfir Itzhak). Enable compiling on `ppc64le` with Clang. #22476 (Kfir Itzhak). Fix compiling boost on `ppc64le`. #22474 (Kfir Itzhak). Fix CMake error about internal CMake variable `CMAKE_ASM_COMPILE_OBJECT` not set on `ppc64le`. #22469 (Kfir Itzhak). Fix Fedora/RHEL/CentOS not finding `libclang_rt.builtins` on `ppc64le`. #22458 (Kfir Itzhak). Enable building with `jemalloc` on `ppc64le`. #22447 (Kfir Itzhak). Fix ClickHouse's config embedding and cctz's timezone embedding on `ppc64le`. #22445 (Kfir Itzhak). Fixed compiling on `ppc64le` and use the correct instruction pointer register on `ppc64le`. #22430 (Kfir Itzhak).
* Re-enable the S3 (AWS) library on `aarch64`. #22484 (Kfir Itzhak).
* Add `tzdata` to Docker containers because reading `ORC` formats requires it. This closes #14156. #22000 (alexey-milovidov).
* Introduce 2 arguments for `clickhouse-server` image Dockerfile: `deb_location` & `single_binary_location`. #21977 (filimonov).
* Allow to use clang-tidy with release builds by enabling assertions if it is used. #21914 (alexey-milovidov).
* Add llvm-12 binaries name to search in cmake scripts. Implicit constants conversions to mute clang warnings. Updated submodules to build with CMake 3.19. Mute recursion in macro expansion in `readpassphrase` library. Deprecated `-fuse-ld` changed to `--ld-path` for clang. #21597 (Ilya Yatsishin).
* Updating `docker/test/testflows/runner/dockerd-entrypoint.sh` to use Yandex dockerhub-proxy, because Docker Hub has enabled very restrictive rate limits. #21551 (vzakaznikov).
* Fix macOS shared lib build. #20184 (nvartolomei).
* Add `ctime` option to `zookeeper-dump-tree`. It allows to dump node creation time. #21842 (Ilya).
## ClickHouse release 21.3 (LTS) {#clickhouse-release-213-lts}
### ClickHouse release v21.3, 2021-03-12 {#clickhouse-release-v213-2021-03-12}
#### Backward Incompatible Change {#backward-incompatible-change-8}
* Now it's not allowed to create MergeTree tables in old syntax with table TTL because it's just ignored. Attach of old tables is still possible. #20282 (alesapin).
* Now all case-insensitive function names will be rewritten to their canonical representations. This is needed for projection query routing (the upcoming feature). #20174 (Amos Bird).
* Fix creation of `TTL` in cases when its expression is a function that is the same as the `ORDER BY` key. It is now allowed to set custom aggregation for primary key columns in `TTL` with `GROUP BY`. Backward incompatible: for primary key columns which are not in `GROUP BY` and aren't set explicitly, the function `any` is now applied instead of `max` when the TTL expires. Also, if you use TTL with `WHERE` or `GROUP BY`, you may see exceptions at merges while performing a rolling update. #15450 (Anton Popov).
#### New Feature {#new-feature-8}
* Add file engine settings: `engine_file_empty_if_not_exists` and `engine_file_truncate_on_insert`. #20620 (M0r64n).
* Add aggregate function `deltaSum` for summing the differences between consecutive rows. #20057 (Russ Frank).
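As a hedged illustration (not part of the changelog itself): `deltaSum` adds up the differences between consecutive rows, ignoring negative steps, so the query below should sum to 6 (+1, +3, skipped −1, +2). The inner subquery is just a hypothetical way to produce a row sequence.

```sql
-- Sketch: deltaSum over an in-memory sequence of rows
SELECT deltaSum(value) AS total
FROM (SELECT arrayJoin([1, 2, 5, 4, 6]) AS value);
```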
* New `event_time_microseconds` column in `system.part_log` table. #20027 (Bharat Nallan).
* Added `timezoneOffset(datetime)` function which gives the offset from UTC in seconds. This closes #issue:19850. #19962 (keenwolf).
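A short hedged sketch of the new function: for a `DateTime` carrying a time zone, `timezoneOffset` should return the zone's offset from UTC in seconds (Moscow is UTC+3, i.e. 10800 seconds).

```sql
-- Sketch: offset from UTC, in seconds, for a zoned DateTime
SELECT timezoneOffset(toDateTime('2021-03-12 12:00:00', 'Europe/Moscow')) AS offset;
```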
* Add setting `insert_shard_id` to support inserting data into a specific shard of a distributed table. #19961 (flynn).
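A minimal sketch of how the setting might be used; `dist_table` and its two columns are hypothetical, and the shard numbering is assumed to start at 1:

```sql
-- Route this insert to shard 1 of the (hypothetical) distributed table only
INSERT INTO dist_table SETTINGS insert_shard_id = 1 VALUES (1, 'a');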
* Function `reinterpretAs` updated to support big integers. Fixes #19691. #19858 (Maksim Kita).
* Added Server Side Encryption Customer Keys (the `x-amz-server-side-encryption-customer-(key/md5)` header) support in S3 client. See the link. Closes #19428. #19748 (Vladimir Chebotarev).
* Added `implicit_key` option for `executable` dictionary source. It allows to avoid printing the key for every record if records come in the same order as the input keys. Implements #14527. #19677 (Maksim Kita).
* Add quota types `query_selects` and `query_inserts`. #19603 (JackyWoo).
* Add function `extractTextFromHTML`. #19600 (zlx19950903), (alexey-milovidov).
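A hedged sketch of the function: it should strip the markup and return the visible text (the exact whitespace handling is not asserted here):

```sql
SELECT extractTextFromHTML('<html><body><p>Hello, <b>world</b>!</p></body></html>');
```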
* Tables with `MergeTree*` engine now have two new table-level settings for query concurrency control. Setting `max_concurrent_queries` limits the number of concurrently executed queries which are related to this table. Setting `min_marks_to_honor_max_concurrent_queries` tells to apply the previous setting only if the query reads at least this number of marks. #19544 (Amos Bird).
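A sketch of how these table-level settings could be attached at creation time; the table name, columns, and the chosen limits are illustrative only:

```sql
-- Hypothetical table: at most 50 concurrent queries, enforced only for
-- queries reading at least 10 marks
CREATE TABLE events_local
(
    ts DateTime,
    user_id UInt64
)
ENGINE = MergeTree
ORDER BY (user_id, ts)
SETTINGS max_concurrent_queries = 50,
         min_marks_to_honor_max_concurrent_queries = 10;
```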
* Added `file` function to read a file from the user_files directory as a String. This is different from the `file` table function. This implements #issue:18851. #19204 (keenwolf).
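A hedged usage sketch, assuming a file `greeting.txt` has been placed in the server's configured `user_files` directory:

```sql
-- Returns the whole file contents as a single String value
SELECT file('greeting.txt') AS contents;
```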
#### Experimental feature {#experimental-feature-7}
* Add experimental `Replicated` database engine. It replicates DDL queries across multiple hosts. #16193 (tavplubix).
* Introduce experimental support for window functions, enabled with `allow_experimental_window_functions = 1`. This is a preliminary, alpha-quality implementation that is not suitable for production use and will change in backward-incompatible ways in future releases. Please see the documentation for the list of supported features. #20337 (Alexander Kuzmenkov).
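A minimal sketch of the experimental window-function syntax gated behind the setting; a running sum over a generated number sequence:

```sql
SET allow_experimental_window_functions = 1;

SELECT
    number,
    sum(number) OVER (ORDER BY number) AS running_total
FROM numbers(5);
```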
* Add the ability to backup/restore metadata files for DiskS3. #18377 (Pavel Kovalenko).
#### Performance Improvement {#performance-improvement-8}
* Hedged requests for remote queries. When the setting `use_hedged_requests` is enabled (off by default), it is allowed to establish many connections with different replicas for a query. A new connection is enabled in case existing connection(s) with replica(s) were not established within `hedged_connection_timeout` or no data was received within `receive_data_timeout`. The query uses the first connection which sends a non-empty progress packet (or data packet, if `allow_changing_replica_until_first_data_packet`); other connections are cancelled. Queries with `max_parallel_replicas > 1` are supported. #19291 (Kruglov Pavel). This allows to significantly reduce tail latencies on very large clusters.
* Added support for `PREWHERE` (and enable the corresponding optimization) when tables have row-level security expressions specified. #19576 (Denis Glazachev).
* The setting `distributed_aggregation_memory_efficient` is enabled by default. It will lower memory usage and improve performance of distributed queries. #20599 (alexey-milovidov).
* Improve performance of GROUP BY multiple fixed size keys. #20472 (alexey-milovidov).
* Improve performance of aggregate functions by more strict aliasing. #19946 (alexey-milovidov).
* Speed up reading from `Memory` tables in extreme cases (when reading speed is in order of 50 GB/sec) by simplification of pipeline and (consequently) less lock contention in pipeline scheduling. #20468 (alexey-milovidov).
* Partially reimplement HTTP server to make it make fewer copies of incoming and outgoing data. It gives up to 1.5x performance improvement on inserting long records over HTTP. #19516 (Ivan).
* Add `compress` setting for `Memory` tables. If it's enabled the table will use less RAM. On some machines and datasets it can also work faster on SELECT, but it is not always the case. This closes #20093. Note: there are reasons why Memory tables can work slower than MergeTree: (1) lack of compression (2) static size of blocks (3) lack of indices and prewhere... #20168 (alexey-milovidov).
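A sketch under the assumption that the setting is passed at table creation via a `SETTINGS` clause (the changelog entry does not show the exact syntax, and the table name is hypothetical):

```sql
-- Assumed syntax: a Memory table that compresses its in-RAM blocks
CREATE TABLE lookup_cache (k UInt64, v String)
ENGINE = Memory
SETTINGS compress = 1;
```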
* Slightly better code in aggregation. #20978 (alexey-milovidov).
* Add back `intDiv`/`modulo` specializations for better performance. This fixes #21293. The regression was introduced in https://github.com/ClickHouse/ClickHouse/pull/18145. #21307 (Amos Bird).
* Do not squash blocks too much on INSERT SELECT if inserting into Memory table. In previous versions an inefficient data representation was created in the Memory table after INSERT SELECT. This closes #13052. #20169 (alexey-milovidov).
* Fix at least one case when the DataType parser may have exponential complexity (found by fuzzer). This closes #20096. #20132 (alexey-milovidov).
* Parallelize SELECT with FINAL for single part with level > 0 when the `do_not_merge_across_partitions_select_final` setting is 1. #19375 (Kruglov Pavel).
* Fill only requested columns when querying `system.parts` and `system.parts_columns`. Closes #19570. #21035 (Anmol Arora).
* Perform algebraic optimizations of arithmetic expressions inside the `avg` aggregate function. close #20092. #20183 (flynn).
#### Improvement {#improvement-8}
* Case-insensitive compression methods for table functions. Also fixed LZMA compression method which was checked in upper case. #21416 (Vladimir Chebotarev).
* Add two settings to delay or throw error during insertion when there are too many inactive parts. This is useful when the server fails to clean up parts quickly enough. #20178 (Amos Bird).
* Provide better compatibility for mysql clients: 1. mysql jdbc, 2. mycli. #21367 (Amos Bird).
* Forbid to drop a column if it's referenced by a materialized view. Closes #21164. #21303 (flynn).
* MySQL dictionary source will now retry unexpected connection failures (Lost connection to MySQL server during query) which sometimes happen on SSL/TLS connections. #21237 (Alexander Kazakov).
* Usability improvement: more consistent `DateTime64` parsing: recognize the case when a unix timestamp with subsecond resolution is specified as a scaled integer (like `1111111111222` instead of `1111111111.222`). This closes #13194. #21053 (alexey-milovidov).
* Do only merging of sorted blocks on initiator with `distributed_group_by_no_merge`. #20882 (Azat Khuzhin).
* When loading config for mysql source ClickHouse will now randomize the list of replicas with the same priority to ensure the round-robin logic of picking the mysql endpoint. This closes #20629. #20632 (Alexander Kazakov).
* Function 'reinterpretAs(x, Type)' renamed into 'reinterpret(x, Type)'. #20611 (Maksim Kita).
* Support vhost for RabbitMQ engine. #20576. #20596 (Kseniia Sumarokova).
* Improved serialization for data types combined of Arrays and Tuples. Improved matching enum data types to protobuf enum type. Fixed serialization of the `Map` data type. Omitted values are now set by default. #20506 (Vitaly Baranov).
* Fixed race between execution of distributed DDL tasks and cleanup of DDL queue. Now DDL task cannot be removed from ZooKeeper if there are active workers. Fixes #20016. #20448 (tavplubix).
* Make FQDN and other DNS related functions work correctly in alpine images. #20336 (filimonov).
* Do not allow early constant folding of explicitly forbidden functions. #20303 (Azat Khuzhin).
* Implicit conversion from integer to Decimal type might succeed even if the integer value does not fit into the Decimal type. Now it throws `ARGUMENT_OUT_OF_BOUND`. #20232 (tavplubix).
* Lockless `SYSTEM FLUSH DISTRIBUTED`. #20215 (Azat Khuzhin).
* Normalize count(constant), sum(1) to count(). This is needed for projection query routing. #20175 (Amos Bird).
* Support all native integer types in bitmap functions. #20171 (Amos Bird).
* Updated `CacheDictionary`, `ComplexCacheDictionary`, `SSDCacheDictionary`, `SSDComplexKeyDictionary` to use LRUHashMap as the underlying index. #20164 (Maksim Kita).
* The setting `access_management` is now configurable on startup by providing `CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT`; defaults to disabled (`0`), which was the prior value. #20139 (Marquitos).
* Fix toDateTime64(toDate()/toDateTime()) for DateTime64 - implement DateTime64 clamping to match DateTime behaviour. #20131 (Azat Khuzhin).
* Quota improvements: SHOW TABLES is now considered as one query in the quota calculations, not two queries. SYSTEM queries now consume quota. Fix calculation of interval's end in quota consumption. #20106 (Vitaly Baranov).
* Supports `path IN (set)` expressions for the `system.zookeeper` table. #20105 (小路).
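A hedged sketch of the new predicate form; the ZooKeeper paths below are illustrative and depend on your cluster layout:

```sql
-- Fetch children of several ZooKeeper paths in a single query
SELECT name, path
FROM system.zookeeper
WHERE path IN ('/clickhouse', '/clickhouse/task_queue');
```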
* Show full details of `MaterializeMySQL` tables in `system.tables`. #20051 (Stig Bakken).
* Fix data race in executable dictionary that was possible only on misuse (when the script returns data ignoring its input). #20045 (alexey-milovidov).
* The value of the MYSQL_OPT_RECONNECT option can now be controlled by the "opt_reconnect" parameter in the config section of the mysql replica. #19998 (Alexander Kazakov).
* If a user calls the `JSONExtract` function with `Float32` type requested, allow inaccurate conversion to the result type. For example the number 0.1 in JSON is double precision and is not representable in Float32, but the user still wants to get it. Previous versions returned 0 for non-Nullable type and NULL for Nullable type to indicate that the conversion is imprecise. The logic was 100% correct but it was surprising to users and led to questions. This closes #13962. #19960 (alexey-milovidov).
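A small sketch of the behaviour change described above; per the entry, older versions returned 0 here, while the new behaviour should yield the nearest representable Float32 to 0.1:

```sql
SELECT JSONExtract('{"v": 0.1}', 'v', 'Float32') AS v;
```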
* Add conversion of block structure for INSERT into Distributed tables if it does not match. #19947 (Azat Khuzhin).
* Improvement for the `system.distributed_ddl_queue` table. Initialize MaxDDLEntryID to the last value after restarting. Before this PR, MaxDDLEntryID will remain zero until a new DDLTask is processed. #19924 (Amos Bird).