owner | repo | id | issue_number | author | body | created_at | updated_at | reactions | author_association |
|---|---|---|---|---|---|---|---|---|---|
ClickHouse | ClickHouse | 280,673,247 | 505 | ilyas-pro | Ok, here's another examples. Seems like toStartOfHour function ignores timezone parameter.
**Example 1:** _toStartOfMonth works with the tz parameter correctly, but toStartOfHour does not_
SELECT
toStartOfMonth(toDateTime(1483210000), 'Asia/Tokyo'),
toStartOfHour(toDateTime(1483210000), 'Asia/Tokyo'),
toString(toSta... | 2017-02-17T15:02:45 | 2017-02-17T15:09:44 | {} | NONE |
ClickHouse | ClickHouse | 282,099,065 | 522 | alexey-milovidov | In distributed setup, intermediate states of aggregate function need to be serialized and transferred over the network. Performance is dependent on size of intermediate data and network speed.
Try to use different aggregate functions. For example, `uniqCombined` has a smaller state, and it is also usually more precise... | 2017-02-23T19:44:15 | 2017-02-23T19:44:35 | {} | MEMBER |
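The smaller intermediate state of `uniqCombined` mentioned above is easy to try side by side; a minimal sketch, assuming a hypothetical table `hits` with a `UserID` column:

```sql
-- Hypothetical table and column names. Both functions estimate distinct
-- counts, but uniqCombined keeps a smaller intermediate state, which matters
-- when states are serialized and shipped between shards in a distributed query.
SELECT
    uniq(UserID)         AS approx_uniq,
    uniqCombined(UserID) AS approx_uniq_combined
FROM hits
```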
ClickHouse | ClickHouse | 282,573,547 | 499 | robot-metrika-test | Can one of the admins verify this patch? | 2017-02-26T17:53:02 | 2017-02-26T17:53:02 | {} | NONE |
ClickHouse | ClickHouse | 283,335,404 | 538 | ztlpn | You can try splitting the file into several smaller files and inserting them in parallel. This should speedup insertion somewhat. | 2017-03-01T13:10:01 | 2017-03-01T13:10:01 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 283,439,253 | 528 | ztlpn | Here is a way to do it in ClickHouse:
```sql
:) SELECT x, y1, y2 FROM (SELECT x, y AS y1 FROM t1) ALL INNER JOIN (SELECT x, y AS y2 FROM t2) USING (x)
SELECT
x,
y1,
y2
FROM
(
SELECT
x,
y AS y1
FROM t1
)
ALL INNER JOIN
(
SELECT
x,
y AS y2
F... | 2017-03-01T19:13:52 | 2017-03-01T19:13:52 | {
"+1": 2
} | CONTRIBUTOR |
ClickHouse | ClickHouse | 284,100,297 | 542 | alexey-milovidov | > the problem of the slowdown is probably connected with some internal mutex
You could check this easily.
First check `SELECT * FROM system.merges`. Do you see a high number of running merges?
Run `top`. You will see either high CPU or high disk usage.
If you see high CPU usage, run `sudo perf top` (requires to instal... | 2017-03-03T23:35:10 | 2017-03-03T23:35:10 | {
"+1": 2
} | MEMBER |
ClickHouse | ClickHouse | 284,121,746 | 538 | sangli00 | @ztlpn If I use the Native interface and write to ClickHouse in parallel, is that OK? | 2017-03-04T02:54:36 | 2017-03-04T02:54:36 | {} | NONE |
ClickHouse | ClickHouse | 280,638,507 | 506 | sangli00 | But does ClickHouse now support ```EXISTS```? | 2017-02-17T12:38:31 | 2017-02-17T12:38:31 | {} | NONE |
ClickHouse | ClickHouse | 280,640,708 | 506 | sangli00 | Could you point me to documentation on how ClickHouse SQL differs from ANSI SQL? | 2017-02-17T12:50:38 | 2017-02-17T12:50:38 | {} | NONE |
ClickHouse | ClickHouse | 280,664,234 | 503 | sangli00 | Yes, I removed the last ```|```,
but I couldn't replace ```|``` with ```\t```.
Using the TabSeparated FORMAT worked.
Thanks.
| 2017-02-17T14:29:16 | 2017-02-17T14:29:16 | {} | NONE |
ClickHouse | ClickHouse | 282,636,440 | 528 | sangli00 | ```
:] select x,y,yy from (select x,y,y as yy from t1) as t1 any inner join (select x,y from t2) as t2 using x;
SELECT
x,
y,
yy
FROM
(
SELECT
x,
y,
y AS yy
FROM t1
) AS t1
ANY INNER JOIN
(
SELECT
x,
y
FROM t2
) AS t... | 2017-02-27T06:09:33 | 2017-02-27T06:09:33 | {} | NONE |
ClickHouse | ClickHouse | 282,854,595 | 465 | proller | https://github.com/yandex/ClickHouse/commit/57c336f267bbf049f9ffdf87e50fe5f2cded4882
for use with self-signed https servers use config option <https_client_insecure>1</https_client_insecure> | 2017-02-27T21:11:25 | 2017-02-27T21:11:25 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 282,925,259 | 527 | sangli00 | Why is every column blank when I copy a CSV into ClickHouse? | 2017-02-28T02:43:57 | 2017-02-28T02:43:57 | {} | NONE |
ClickHouse | ClickHouse | 283,047,524 | 534 | proller | 1. make install requires sudo
2. make install is currently unused and installs some garbage files from contrib | 2017-02-28T14:06:25 | 2017-02-28T14:06:25 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 283,430,367 | 525 | ztlpn | Yes, ClickHouse uses LZ4 compression algorithm for compressing column data. Optionally you can configure it so that ZSTD algorithm is chosen (see [comments in config.xml](https://github.com/yandex/ClickHouse/blob/master/dbms/src/Server/config.xml#L158)) | 2017-03-01T18:42:50 | 2017-03-01T18:42:50 | {} | CONTRIBUTOR |
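The ZSTD switch mentioned above is a server-config change; a sketch of the relevant `config.xml` fragment (element names follow the linked config.xml comments, but treat the exact structure and thresholds as assumptions for your ClickHouse version):

```xml
<!-- config.xml: choose ZSTD instead of the default LZ4 for large parts.
     The min_part_size / min_part_size_ratio thresholds are illustrative. -->
<compression>
    <case>
        <min_part_size>10000000000</min_part_size>
        <min_part_size_ratio>0.01</min_part_size_ratio>
        <method>zstd</method>
    </case>
</compression>
```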
ClickHouse | ClickHouse | 284,063,563 | 553 | alexey-milovidov | Respon**s**e | 2017-03-03T20:35:51 | 2017-03-03T20:35:51 | {} | MEMBER |
ClickHouse | ClickHouse | 280,665,044 | 506 | alexey-milovidov | Nevertheless, you could run modified version of TPC-H benchmark, if you rewrite queries and possibly replace most of dimension tables with [external dictionaries](https://clickhouse.yandex/reference_en.html#External%20dictionaries). | 2017-02-17T14:32:11 | 2017-02-17T14:32:11 | {} | MEMBER |
ClickHouse | ClickHouse | 282,711,021 | 508 | DeamonMV | I found the issue. I forgot to correct <listen_host>192.168.xxx.xxx</listen_host>.
But I ran into another issue. When I run select count() from dbclick.events (it's my distributed table),
I always get a different value.
| 2017-02-27T12:51:39 | 2017-02-27T12:51:39 | {
"+1": 1
} | NONE |
ClickHouse | ClickHouse | 283,067,385 | 534 | mfridental | Maybe it would be better solved if "make all" also linked the executables? Our problem was that clickhouse-client is not built after "make all", and we spent half a day trying to understand what we were doing wrong. | 2017-02-28T15:18:06 | 2017-02-28T15:18:06 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 283,794,256 | 549 | alexey-milovidov | autotests | 2017-03-02T21:53:13 | 2017-03-02T21:53:13 | {} | MEMBER |
ClickHouse | ClickHouse | 283,998,624 | 542 | alexey-milovidov | You could adjust the size of the thread pool for background operations.
It can be set in `users.xml` via `background_pool_size` in `/profiles/default/`.
The default value is 16. Lowering the size of this pool lets you limit maximum CPU and disk usage.
But don't set it too low.
Also you could adjust some fine settings f... | 2017-03-03T16:21:29 | 2017-03-03T16:21:29 | {
"+1": 2
} | MEMBER |
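The setting described above would look roughly like this in `users.xml`; a sketch, assuming the `/profiles/default/` placement mentioned in the comment (the value 8 is just an illustrative reduction from the default 16):

```xml
<!-- users.xml: limit the background merge/fetch thread pool (default 16). -->
<profiles>
    <default>
        <background_pool_size>8</background_pool_size>
    </default>
</profiles>
```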
ClickHouse | ClickHouse | 284,101,773 | 553 | proller | fixed | 2017-03-03T23:44:49 | 2017-03-03T23:44:49 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 280,640,363 | 506 | sangli00 | If i used ClickHouse test TPCH is OK? | 2017-02-17T12:48:49 | 2017-02-17T12:48:49 | {} | NONE |
ClickHouse | ClickHouse | 281,176,099 | 514 | ludv1x | autotests | 2017-02-20T20:37:17 | 2017-02-20T20:37:17 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 281,300,334 | 467 | ludv1x | What doesn't precisely work?
Could you provide your configuration and queries? | 2017-02-21T10:11:34 | 2017-02-21T10:11:34 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 282,573,076 | 499 | alexey-milovidov | Fixed in another pull request. | 2017-02-26T17:46:21 | 2017-02-26T17:46:21 | {} | MEMBER |
ClickHouse | ClickHouse | 283,441,422 | 532 | ztlpn | You have to join tables pairwise. Also, this means you have to manually choose join order:
```sql
:) SELECT x, y1, y2, y3 FROM (SELECT x, y AS y3 FROM t3) ALL INNER JOIN (SELECT x, y1, y2 FROM (SELECT x, y AS y1 FROM t1) ALL INNER JOIN (SELECT x, y AS y2 FROM t2) USING (x)) USING (x)
SELECT
x,
y1,
y... | 2017-03-01T19:21:42 | 2017-03-01T19:21:42 | {
"+1": 7,
"confused": 8,
"eyes": 1
} | CONTRIBUTOR |
ClickHouse | ClickHouse | 283,921,069 | 532 | ztlpn | Indeed, joining many tables is currently not very convenient but there are plans to improve the join syntax.
Also note that if many joins are necessary because your schema is some variant of the [star schema](https://en.wikipedia.org/wiki/Star_schema) and you need to join dimension tables to the fact table, then in ... | 2017-03-03T10:34:04 | 2017-03-03T10:34:04 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 281,054,265 | 512 | ztlpn | Confirmed, thanks for reporting.
Another workaround: restart the server. | 2017-02-20T11:22:26 | 2017-02-20T11:22:26 | {
"+1": 1
} | CONTRIBUTOR |
ClickHouse | ClickHouse | 281,079,983 | 511 | ludv1x | We removed the libmongodb dependency in December. Now we use the Poco library (which is built from sources by default) for external MongoDB dictionaries.
The remaining mentions of the ENABLE_MONGODB variable in ClickHouse's sources don't make sense.
ClickHouse | ClickHouse | 281,272,728 | 467 | VictoryWangCN | @ludv1x but it doesn't work... | 2017-02-21T08:09:23 | 2017-02-21T08:09:23 | {} | NONE |
ClickHouse | ClickHouse | 282,783,732 | 529 | ztlpn | Short answer: currently there is no extension support in ClickHouse so the only way to define a "UDF" is to implement it as a built-in function and open the pull request to merge the code into the main ClickHouse repository. See issue #11 | 2017-02-27T17:08:39 | 2017-02-27T17:08:39 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 283,044,347 | 534 | robot-metrika-test | Can one of the admins verify this patch? | 2017-02-28T13:53:02 | 2017-02-28T13:53:02 | {} | NONE |
ClickHouse | ClickHouse | 283,206,619 | 537 | alexey-milovidov | Thanks! Almost OK. Just a few little changes to do. | 2017-03-01T00:33:08 | 2017-03-01T00:33:08 | {} | MEMBER |
ClickHouse | ClickHouse | 283,705,010 | 538 | vavrusa | You can break it by parts and insert in parallel. | 2017-03-02T16:35:15 | 2017-03-02T16:35:15 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 283,711,310 | 546 | alpswjc | Alex Zatelepin has confirmed it's a bug | 2017-03-02T16:55:34 | 2017-03-02T16:55:34 | {} | NONE |
ClickHouse | ClickHouse | 283,027,603 | 467 | VictoryWangCN | it works.
I highly appreciate your help, thanks. | 2017-02-28T12:34:33 | 2017-02-28T12:35:22 | {} | NONE |
ClickHouse | ClickHouse | 283,679,728 | 527 | alexey-milovidov | Most likely you are using tables of the `Log` or `TinyLog` engines. These kinds of tables are intended for simple write-once scenarios. They use a read-write lock, and INSERT SELECT into the same table is impossible.
As mentioned in documentation, "Note that for most serious tasks, you should use engines from the Merg... | 2017-03-02T15:11:18 | 2017-03-02T15:11:18 | {} | MEMBER |
ClickHouse | ClickHouse | 283,787,455 | 549 | robot-metrika-test | Can one of the admins verify this patch? | 2017-03-02T21:28:02 | 2017-03-02T21:28:02 | {} | NONE |
ClickHouse | ClickHouse | 284,026,481 | 538 | ztlpn | > If I insert 1TB of data into ClickHouse, how should I do it?
Inserting 1TB shouldn't be a problem. But in your case performance is indeed suspiciously low: ~16MB/s (should be ~100MB/s), and as the output of `time` suggests, the client is the bottleneck. Could you provide some info on the specs of the machine you are running t... | 2017-03-03T18:04:58 | 2017-03-03T18:04:58 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 281,126,047 | 514 | robot-metrika-test | Can one of the admins verify this patch? | 2017-02-20T16:38:02 | 2017-02-20T16:38:02 | {} | NONE |
ClickHouse | ClickHouse | 281,460,753 | 518 | proller | usage:
```
cd ~
git clone --recursive --depth 1 -b poco-1.7.8 https://github.com/pocoproject/poco.git
git clone --recursive --depth 1 https://github.com/yandex/ClickHouse.git
cd poco ... | 2017-02-21T19:54:15 | 2017-02-21T19:54:26 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 282,802,122 | 530 | ztlpn | A similar idea goes by the name "range index" or "min-max index" (https://en.wikipedia.org/wiki/Block_Range_Index). It is not even necessary for the indexed column to be part of the primary key - the data just has to be stored in such a way that min-max ranges do not overlap much for different blocks.
In fact, Merge... | 2017-02-27T18:12:48 | 2017-02-27T18:12:48 | {
"+1": 1
} | CONTRIBUTOR |
ClickHouse | ClickHouse | 282,927,395 | 527 | sangli00 | ```
Code: 1000. DB::Exception: Received from localhost:9000, ::1. DB::Exception: System exception: cannot lock reader/writer lock.
```
this is error | 2017-02-28T02:58:53 | 2017-02-28T02:58:53 | {} | NONE |
ClickHouse | ClickHouse | 283,182,881 | 537 | robot-metrika-test | Can one of the admins verify this patch? | 2017-02-28T22:33:02 | 2017-02-28T22:33:02 | {} | NONE |
ClickHouse | ClickHouse | 284,093,240 | 542 | RoyBellingan | Hi, I am a coworker of George3d6.
Thank you very much for the response, and this AMAZING piece of software!
I would like to add some more info:
We left ClickHouse "digesting" those 1B rows / 165 columns, about 170GB of data, for one night, and activity is still going on.
We are running some preliminary test, so hardware spec... | 2017-03-03T22:54:47 | 2017-03-03T22:54:47 | {
"+1": 1
} | CONTRIBUTOR |
ClickHouse | ClickHouse | 280,639,112 | 506 | ludv1x | Yes https://clickhouse.yandex/reference_en.html#EXISTS | 2017-02-17T12:41:47 | 2017-02-17T12:41:47 | {} | CONTRIBUTOR |
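A minimal illustration of the statement referenced above (the table name is hypothetical):

```sql
-- Returns a single UInt8 column: 1 if the table exists, 0 otherwise.
EXISTS TABLE my_table
```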
ClickHouse | ClickHouse | 280,688,125 | 505 | ilyas-pro | OK, then how can I explain this example:
SELECT
toStartOfMonth(toDateTime(1483210000), 'Asia/Tokyo'),
toStartOfMonth(toDateTime(1483210000), 'Europe/Moscow')\G
──────
toStartOfMonth(toDateTime(1483210000), \'Asia/Tokyo\'): 2017-01-01
toStartOfMonth(toDateTime(1483210000), \'Europe/Moscow\'): 2016-12-01
S... | 2017-02-17T15:56:39 | 2017-02-17T15:58:11 | {} | NONE |
ClickHouse | ClickHouse | 281,691,513 | 293 | f1yegor | @artpaul @alexey-milovidov couldn't find documentation for this functionality. should we update https://clickhouse.yandex/reference_en.html?
| 2017-02-22T14:54:25 | 2017-02-22T14:54:25 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 282,584,140 | 508 | alexey-milovidov | In all our installations, we have only one shard per server.
Having multiple shards on the same server is possible, but unusual.
You should place the tables for these shards in different databases on the server.
Then you should specify `<default_database>` (near `<host>`, `<port>`) with the name of corresponding dat... | 2017-02-26T20:21:18 | 2017-02-26T20:21:18 | {
"+1": 1
} | MEMBER |
ClickHouse | ClickHouse | 283,489,653 | 535 | alexey-milovidov | Ok, thanks! | 2017-03-01T22:19:19 | 2017-03-01T22:19:19 | {} | MEMBER |
ClickHouse | ClickHouse | 283,747,727 | 548 | alexey-milovidov | Too early. | 2017-03-02T19:05:59 | 2017-03-02T19:05:59 | {} | MEMBER |
ClickHouse | ClickHouse | 281,084,516 | 511 | chipitsine | ok, I can make "cleanup PR" on that
stay tuned.... | 2017-02-20T13:52:28 | 2017-02-20T13:52:28 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 283,071,872 | 534 | mfridental | Closing that pull request to replace it with mention to "make clickhouse" | 2017-02-28T15:33:26 | 2017-02-28T15:33:26 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 283,838,273 | 538 | sangli00 | @ztlpn
If I insert 1TB of data into ClickHouse, how should I do it? | 2017-03-03T01:31:55 | 2017-03-03T01:31:55 | {} | NONE |
ClickHouse | ClickHouse | 283,839,302 | 525 | sangli00 | @ztlpn Thanks | 2017-03-03T01:38:13 | 2017-03-03T01:38:13 | {} | NONE |
ClickHouse | ClickHouse | 280,668,648 | 291 | alexey-milovidov | Ok, but what is exact reason, why `libclang-dev` is needed? | 2017-02-17T14:45:37 | 2017-02-17T14:45:37 | {} | MEMBER |
ClickHouse | ClickHouse | 280,730,648 | 291 | vavrusa | So there are two stdlib.h, the one in `/usr/share/clickhouse/headers/usr/include/` which precedes the `-isystem /usr/share/clickhouse/headers/usr/include/c++/*/`. The `cstdlib` in the latter one has `#include_next <stdlib.h>` which requires inclusion of this header from the "next" include path, but there is none. So I'... | 2017-02-17T18:36:20 | 2017-02-17T18:36:20 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 282,530,286 | 293 | alexey-milovidov | Yes. | 2017-02-26T03:36:06 | 2017-02-26T03:36:06 | {} | MEMBER |
ClickHouse | ClickHouse | 282,592,736 | 521 | George3d6 | Well, the fact that you are considering the implementation is neat; it's nice to see the project is very much evolving, since I might use it as my storage for an rt~ish analytics infrastructure :)
The way I see it, that kind of INSERT statement would fix this issue. What I was thinking of was more of a column that doe... | 2017-02-26T22:16:30 | 2017-02-26T22:16:30 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 284,099,779 | 542 | alexey-milovidov | There are two types of background actions:
- merging of data parts;
- fetching of data part from replica.
When you use non-replicated MergeTree, there are only tasks of first type.
And when you use ReplicatedMergeTree, in fact there is single type of tasks, that look at replication queue and then doing either mer... | 2017-03-03T23:31:32 | 2017-03-03T23:31:32 | {} | MEMBER |
ClickHouse | ClickHouse | 280,640,165 | 503 | sangli00 | 1. copy this data to a postgres database
2. copy from the postgres database to a directory
3. use clickhouse-client to copy into ClickHouse
This is very troublesome.
![Uploading 5784F93A-C638-4F3A-A79B-D653845DAFD6.png…]()
| 2017-02-17T12:47:44 | 2017-02-17T12:47:44 | {} | NONE |
ClickHouse | ClickHouse | 280,649,451 | 506 | sangli00 | So, which are the ClickHouse benchmarks? TPC-H & TPC-DS test OLAP systems. | 2017-02-17T13:29:50 | 2017-02-17T13:29:50 | {} | NONE |
ClickHouse | ClickHouse | 280,691,842 | 505 | ztlpn | While moments when the new hour starts do not differ between Moscow and Tokyo, moments when the new month (as well as new day, new week and new year) starts do differ. So 1483210000 is before 2017-01-01 00:00:00 (local time when the new month starts) in Moscow but after 2017-01-01 00:00:00 in Tokyo. Thus toStartOfMonth... | 2017-02-17T16:09:43 | 2017-02-17T16:12:41 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 282,754,975 | 508 | DeamonMV | And I found this issue.
I wrote this section wrongly when creating the replicated table:
```Distributed(calcs, default, hits[, sharding_key])```
I noticed that over time the count in the distributed table keeps increasing
```
:) select count() from events_tv11
SELECT count()
FROM events_tv11
┌──count()─┐
│ ... | 2017-02-27T15:37:32 | 2017-02-27T15:37:32 | {} | NONE |
ClickHouse | ClickHouse | 282,689,149 | 508 | DeamonMV | Hi, thank you for the reply. I tried to configure that today.
I have five "fast" servers and two "slow" ones - the "slow" servers contain the five replicas of the "fast" servers.
The slow servers hold five replicas from the fast servers; accordingly, there are five databases on the slow servers, and the table name is always the same.
there are an example. ... | 2017-02-27T10:55:59 | 2017-02-27T15:37:55 | {} | NONE |
ClickHouse | ClickHouse | 283,074,837 | 535 | robot-metrika-test | Can one of the admins verify this patch? | 2017-02-28T15:43:04 | 2017-02-28T15:43:04 | {} | NONE |
ClickHouse | ClickHouse | 283,337,772 | 540 | HumanUser | > also please change path from /opt/clickhouse to /var/lib/clickhouse
Done | 2017-03-01T13:21:07 | 2017-03-01T13:21:07 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 283,422,318 | 542 | alexey-milovidov | You could do `SELECT * FROM system.merges` to show currently running background merges (also known as "compactions"). | 2017-03-01T18:13:58 | 2017-03-01T18:13:58 | {} | MEMBER |
ClickHouse | ClickHouse | 283,491,771 | 533 | ztlpn | Even simpler:
```sql
:) SELECT * FROM (SELECT [1, 2, 3] AS arr) ARRAY JOIN arr AS a
SELECT *
FROM
(
SELECT [1, 2, 3] AS arr
)
ARRAY JOIN arr AS a
┌─arr─────┐
│ [1,2,3] │
│ [1,2,3] │
│ [1,2,3] │
└─────────┘
3 rows in set. Elapsed: 0.002 sec.
```
The contents of `arr` column are as expected. Fro... | 2017-03-01T22:27:56 | 2017-03-01T22:28:28 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 280,663,973 | 506 | alexey-milovidov | Due to the inconveniences of the supported JOIN syntax in ClickHouse, it is quite difficult to run a TPC-H style benchmark. Most benchmarks with ClickHouse use a "big flat table" schema. See the https://github.com/yandex/ClickHouse/tree/master/doc/example_datasets directory for examples.
| 2017-02-17T14:28:20 | 2017-02-17T14:28:20 | {} | MEMBER |
ClickHouse | ClickHouse | 282,122,083 | 522 | mathewsp79 | I did try with uniqCombined, and yes, it did improve the time a little bit, but still not as fast as uniq on the local table.
I was playing with different query setups, and the closest I have come to achieving this is given below.
What I did is:
- for uniq, I used the local table
- for other calcs like Sum (), count... | 2017-02-23T21:10:19 | 2017-02-23T22:57:48 | {} | NONE |
ClickHouse | ClickHouse | 283,965,804 | 542 | George3d6 | OK, since there was no update, I'm going to phrase the question differently:
If I were to modify the code so that I could set a flag using the client in order to stop all background operations while running a query, do you think that would break anything important? Or just slow down the merges? | 2017-03-03T14:24:17 | 2017-03-03T14:24:17 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 280,657,245 | 505 | ztlpn | Could you provide expected results and how they differ from what you get? | 2017-02-17T14:02:03 | 2017-02-17T14:02:03 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 280,852,330 | 379 | alexey-milovidov | ClickHouse has functions for rounding Date and DateTime values, such as `toStartOfWeek`, `toStartOfHour`, `toStartOfFiveMinute`... These functions are well optimized, and their performance is comparable to just using a pre-computed value.
The `ontime` example schema has precomputed columns only because it is the original schema ... | 2017-02-18T15:26:02 | 2017-02-18T15:26:02 | {
"+1": 7
} | MEMBER |
ClickHouse | ClickHouse | 282,280,453 | 508 | DeamonMV | Can you add to this manual, case when couple of shard looks into the same server? | 2017-02-24T12:31:33 | 2017-02-24T12:31:33 | {} | NONE |
ClickHouse | ClickHouse | 282,577,438 | 521 | alexey-milovidov | > Without permanently storing "my_original_string" in the table. Is this possible?
Currently it is not possible.
What syntax do you prefer for implementation?
We are considering possibility to implement something like this:
```
INSERT INTO table (my_string) SELECT operation(my_original_string) FROM input(my_or... | 2017-02-26T18:49:03 | 2017-02-26T18:49:03 | {} | MEMBER |
ClickHouse | ClickHouse | 283,070,930 | 534 | proller | ```
make clickhouse
```
creates dbms/src/Server/clickhouse executable, you can use it with --client or --server | 2017-02-28T15:30:14 | 2017-02-28T15:30:27 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 283,329,748 | 540 | proller | also please change path from /opt/clickhouse to /var/lib/clickhouse | 2017-03-01T12:41:22 | 2017-03-01T12:41:22 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 283,641,748 | 542 | George3d6 | Isn't there any way to stop this merging when executing queries? They appear to be quite greedy with both processor time and I/O. Can't I set a "limit" on how many resources merges can take? Or even better, set the db up in such a way that merges stop when I run SELECT queries. | 2017-03-02T12:33:24 | 2017-03-02T12:33:24 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 283,838,596 | 371 | proller | done.
https://github.com/yandex/ClickHouse/commit/107fb86a40e39deaaa2a00d32fe2f0af6595348b | 2017-03-03T01:33:47 | 2017-03-03T01:33:47 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 280,667,353 | 506 | sangli00 | @alexey-milovidov
If I benchmark ClickHouse, should I only reference https://github.com/yandex/ClickHouse/tree/master/doc/example_datasets?
I need to compare some MPP DBs: Greenplum & HAWQ & DeepGreen...
Maybe I will test TB-scale data in ClickHouse, so could you tell me a good method? | 2017-02-17T14:40:43 | 2017-02-17T14:40:43 | {} | NONE |
ClickHouse | ClickHouse | 282,948,147 | 465 | vavrusa | Awesome, thanks a lot! I'll close this and reopen if there's a problem. | 2017-02-28T05:43:57 | 2017-02-28T05:43:57 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 283,017,151 | 467 | ludv1x | A Materialized View over a Distributed table doesn't distribute insertions among the cluster.
Only insertions into `default.insert_view_local` will be distributed.
You need to create the Materialized View over `insert_view_local` (not over `insert_view`) on each server.
"+1": 9
} | CONTRIBUTOR |
ClickHouse | ClickHouse | 283,327,333 | 540 | robot-metrika-test | Can one of the admins verify this patch? | 2017-03-01T12:28:02 | 2017-03-01T12:28:02 | {} | NONE |
ClickHouse | ClickHouse | 283,416,542 | 542 | ztlpn | Almost certainly this is the merge process. MergeTree table consists of a number of parts that are sorted by the primary key. Each INSERT statement creates at least one new part. Merge process periodically selects several parts and merges them into one bigger part that is also sorted by primary key. Yes, this is normal... | 2017-03-01T17:53:23 | 2017-03-01T17:53:23 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 282,573,455 | 499 | alexey-milovidov | No, for some reason it still hasn't been fixed.
And also:
```
[[ `grep -E "^#.*" $CLICKHOUSE_CRONFILE` == `cat $CLICKHOUSE_CRONFILE` ]];
```
isn't it obvious that this code is garbage? | 2017-02-26T17:51:52 | 2017-02-26T17:51:52 | {} | MEMBER |
ClickHouse | ClickHouse | 282,576,661 | 525 | alexey-milovidov | ClickHouse does not support a data type for time with finer than one-second resolution.
The suggested solution is to store time with resolution up to seconds in one column of type DateTime, and to store milliseconds/microseconds in a separate column of type UInt16 or UInt32.
This is reasonable, because fractional part of... | 2017-02-26T18:38:23 | 2017-02-26T18:38:23 | {
"confused": 6
} | MEMBER |
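The two-column layout suggested above can be sketched as follows (table and column names are hypothetical; the engine clause uses the old-style MergeTree syntax of that era):

```sql
-- Seconds go into a DateTime column; the sub-second fraction goes into a
-- UInt16 (milliseconds) or UInt32 (microseconds) column, as suggested above.
CREATE TABLE events
(
    event_date Date,
    event_time DateTime,
    event_ms   UInt16
) ENGINE = MergeTree(event_date, (event_time, event_ms), 8192)
```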
ClickHouse | ClickHouse | 282,615,022 | 525 | sangli00 | Oh, this is bad news.
Does ClickHouse use LZ4 compression?
If I use a timestamp, do I need two columns? | 2017-02-27T02:46:26 | 2017-02-27T02:46:26 | {} | NONE |
ClickHouse | ClickHouse | 282,642,529 | 467 | VictoryWangCN | config
```
<yandex>
<clickhouse_remote_servers>
<perftest_2shards_1replicas>
<shard>
<replica>
<host>localtest.clickhouse.shard1</host>
<port>9000</port>
</replica>
</shard>
<shard>
<replica>
<ho... | 2017-02-27T06:56:55 | 2017-02-27T06:56:55 | {} | NONE |
ClickHouse | ClickHouse | 283,744,079 | 548 | robot-metrika-test | Can one of the admins verify this patch? | 2017-03-02T18:53:02 | 2017-03-02T18:53:02 | {} | NONE |
ClickHouse | ClickHouse | 283,840,327 | 532 | sangli00 | So, I will join tables A, B, C, D, E:
select ... (select column from C any inner join (select columns from A any inner join B) using (ID) ) using (ID) ......
My god, if I need A join B and A join C and A join D,
B join C and B join D...
I can't write this SQL. It is very troublesome. | 2017-03-03T01:44:23 | 2017-03-03T01:44:23 | {} | NONE |
ClickHouse | ClickHouse | 284,117,995 | 547 | alexey-milovidov | Why `re2` namespace is used somewhere in FunctionsStringSearch.h instead of `re2_st`? | 2017-03-04T02:01:51 | 2017-03-04T02:01:51 | {} | MEMBER |
ClickHouse | ClickHouse | 280,663,637 | 503 | ludv1x | I downloaded `TPCH_Tools_v2.17.1.zip` and loaded data from `customer.tbl.150000` into ClickHouse.
You just need to remove the last `|` in each line and replace each `|` with `\t`.
After that you can import the data into ClickHouse using the TabSeparated FORMAT.
```
sed 's/|$//g' customer.tbl.150000 | tr "|" "\t" | clickhouse-client -q... | 2017-02-17T14:27:06 | 2017-02-17T14:27:58 | {} | CONTRIBUTOR |
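The transformation described above (drop the trailing `|`, turn the remaining separators into tabs) can be checked on a toy line; the sample content below is made up:

```shell
# A made-up line in TPC-H .tbl style: '|'-separated fields with a trailing '|'.
line='1|Customer#000000001|IVhzIApeRb|15|'
# Strip the trailing '|' and convert the remaining separators into tabs,
# yielding a TabSeparated row ready to pipe into clickhouse-client.
printf '%s\n' "$line" | sed 's/|$//' | tr '|' '\t'
```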
ClickHouse | ClickHouse | 280,681,017 | 505 | ztlpn | Results are actually correct in this case. Below are some explanations.
Here is how toStartOf* functions work: they take a moment in time (a DateTime value) and a timezone, round this moment to the start of desired time period _in that timezone_ and return the resulting moment in time (again a DateTime value).
No... | 2017-02-17T15:31:28 | 2017-02-17T15:31:28 | {
"+1": 2
} | CONTRIBUTOR |
ClickHouse | ClickHouse | 280,873,301 | 508 | ludv1x | You can find an example in [Quick start guide](https://clickhouse.yandex/tutorial.html) -> paragraph "ClickHouse deployment to cluster".
Let me know if you have difficulties with that guide; we can enhance it. | 2017-02-18T20:39:42 | 2017-02-18T20:39:42 | {
"+1": 1
} | CONTRIBUTOR |
ClickHouse | ClickHouse | 282,591,647 | 522 | alexey-milovidov | Also try to define `ad_product_name` dictionary attribute as `<injective>true</injective>`.
(Look at `injective` here: https://clickhouse.yandex/reference_en.html#External%20dictionaries)
In that case, dictionary values will be substituted after GROUP BY. | 2017-02-26T22:02:12 | 2017-02-26T22:02:12 | {} | MEMBER |
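The flag mentioned above belongs in the dictionary's attribute definition; a sketch, with the attribute name taken from the comment and the rest of the structure assumed:

```xml
<!-- External dictionary attribute marked injective: distinct keys map to
     distinct values, so dictGet* calls can be applied after GROUP BY. -->
<attribute>
    <name>ad_product_name</name>
    <type>String</type>
    <null_value></null_value>
    <injective>true</injective>
</attribute>
```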
ClickHouse | ClickHouse | 282,865,540 | 527 | ztlpn | You can use `replaceRegexpOne` function to trim whitespace at the end of a string:
```sql
SELECT replaceRegexpOne(y, '\\s+$', '') FROM tx
``` | 2017-02-27T21:47:58 | 2017-02-27T21:47:58 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 282,927,292 | 527 | sangli00 | @ztlpn why can't ``` insert into t select * from t;```? | 2017-02-28T02:58:23 | 2017-02-28T02:58:23 | {} | NONE |
ClickHouse | ClickHouse | 283,027,442 | 467 | VictoryWangCN | So, the final SQL statement is as follows:
```
CREATE TABLE IF NOT EXISTS insert_view_local(metricId Int64, applicationId Int64, agentRunId Int64, num1 Float64, num2 Float64, tc_startDate Date, tc_startTime UInt64) ENGINE = Null;
CREATE TABLE insert_view as insert_view_local ENGINE = Distributed(perftest_2shards_1... | 2017-02-28T12:33:39 | 2017-02-28T12:33:39 | {
"+1": 2,
"hooray": 1
} | NONE |