issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 261k ⌀ | issue_title stringlengths 1 925 | issue_comments_url stringlengths 56 81 | issue_comments_count int64 0 2.5k | issue_created_at stringlengths 20 20 | issue_updated_at stringlengths 20 20 | issue_html_url stringlengths 37 62 | issue_github_id int64 387k 2.46B | issue_number int64 1 127k |
|---|---|---|---|---|---|---|---|---|---|
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
ODBC Driver returns duplicate rows executing ODBC API SQLStatistics on a table.
Also, ORDINAL_POSITION returned in the resultset is zero.
**To Reproduce**
1. create some test tables with primary key, alternate key and some foreign keys. Example:
create table in_sync_cmp_type (
cmp_type_cd char(8) not null,
description varchar(30) not null,
ctrl_ins_dtm timestamp not null default CURRENT_TIMESTAMP,
ctrl_upd_dtm timestamp not null,
ctrl_usr_id varchar(256) not null,
constraint pk_in_sync_cmp_type primary key (cmp_type_cd)
);
/*----------------------------------------------------------------------------*/
/* Table: in_sync_user */
/*----------------------------------------------------------------------------*/
create table in_sync_user (
usr_oid int not null,
logid varchar(256) not null,
ctrl_ins_dtm timestamp not null default CURRENT_TIMESTAMP,
ctrl_upd_dtm timestamp not null,
ctrl_usr_id varchar(256) not null,
constraint pk_in_sync_user primary key (usr_oid),
constraint ak_isu_logid unique (logid)
);
/*----------------------------------------------------------------------------*/
/* Table: in_sync_resultset */
/*----------------------------------------------------------------------------*/
create table in_sync_resultset (
rs_oid int not null,
rs_type_cd char(8) not null,
script_oid int null,
script_upd_ind int null,
ctrl_ins_dtm timestamp not null default CURRENT_TIMESTAMP,
ctrl_upd_dtm timestamp not null,
ctrl_usr_id varchar(256) not null,
constraint pk_in_sync_resultset primary key (rs_oid)
);
create index ix1in_sync_resultset on in_sync_resultset (
ctrl_usr_id,
rs_type_cd,
ctrl_ins_dtm
);
create table in_sync_data_source (
ds_oid integer not null,
dbms_name varchar(256) not null,
server_name varchar(256) null,
cluster_id varchar(256) null,
database_name varchar(520) null,
logid varchar(256) null,
owner_oid int null,
root_rs_oid int null,
readonly_ind int not null,
ctrl_ins_dtm timestamp not null default CURRENT_TIMESTAMP,
ctrl_upd_dtm timestamp not null,
ctrl_usr_id varchar(256) not null,
constraint pk_in_sync_data_source primary key (ds_oid)
);
alter table in_sync_data_source
add constraint fk_isds_isu foreign key (owner_oid)
references in_sync_user (usr_oid)
on update restrict
on delete restrict;
alter table in_sync_data_source
add constraint fk_isds_root_rs_oid foreign key (root_rs_oid)
references in_sync_resultset (rs_oid)
on update restrict
on delete restrict;
2. Execute ODBC function SQLStatistics on one table. Example: table in_sync_user
3. Driver returns duplicate rows all with 0 ORDINAL_POSITION
**Expected behavior**
In the example above for table in_sync_user, the driver should return only two rows, with the names of the primary and alternate keys defined on the table, and both rows should have an ORDINAL_POSITION of 1.
**Screenshots**
Attached is a screenshot of the SQLStatistics resultset rows for the example table in_sync_user described above.
**Software versions**
MonetDB 11.41.0013
ODBC Driver: MonetDBODBClib 11.41.0013 Jul2021-SP2
- OS and version: Windows 10 x64
- Installed from release package
**Issue labeling**
ODBC Driver SQLStatistics extra rows
**Additional context**
Looks like the returned resultset has rows for the indexes of other tables but all labeled with the name of the table SQLStatistics is executed on.

| ODBC Driver SQLStatistics returns duplicate rows/rows for other tables | https://api.github.com/repos/MonetDB/MonetDB/issues/7215/comments | 1 | 2022-01-08T04:19:05Z | 2024-06-27T13:16:46Z | https://github.com/MonetDB/MonetDB/issues/7215 | 1,096,828,939 | 7,215 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
I am not sure if this is a bug or a feature request. It seems that Python loader functions do work with temporary tables when I first create the table and then COPY INTO it from the loader function. However, it does not work if I try to create the temp table directly from the loader function.
**To Reproduce**
BEGIN TRANSACTION;
CREATE TEMP TABLE test_table(a INT) on commit drop;
CREATE OR REPLACE LOADER myloader(c INT) LANGUAGE PYTHON {
_emit.emit( { 'a' : c+1 } )
};
COPY LOADER INTO test_table FROM myloader((select 5));
select * from test_table;
COMMIT;
The above flow works and returns 6
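For readers unfamiliar with the loader API, the `_emit.emit` pattern used above can be modeled in plain Python. This is a sketch only; the `Emitter` class below is a stand-in for illustration, not MonetDB code.

```python
# Stand-in for MonetDB's loader interface: _emit.emit takes a dict mapping
# column names to values and appends one row to the result.
class Emitter:
    def __init__(self):
        self.rows = []

    def emit(self, row):
        self.rows.append(row)


def myloader(_emit, c):
    _emit.emit({'a': c + 1})


e = Emitter()
myloader(e, 5)  # mirrors: COPY LOADER INTO test_table FROM myloader((select 5))
```

With input 5 the emitter collects one row, `{'a': 6}`, matching the result of the working flow above.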
Using this syntax:
sql> CREATE TEMP TABLE TEST1 from LOADER myloader((select 5));
syntax error in: "create temp table test1 from"
However this syntax is allowed for persistent tables:
sql> CREATE TABLE TEST1 from LOADER myloader((select 5));
operation successful
It seems that loader functions can be used to load temporary tables, but this is not supported by the parser. | Loader functions with temp tables. | https://api.github.com/repos/MonetDB/MonetDB/issues/7214/comments | 2 | 2022-01-05T11:24:52Z | 2024-06-27T13:16:45Z | https://github.com/MonetDB/MonetDB/issues/7214 | 1,094,249,811 | 7,214 |
[
"MonetDB",
"MonetDB"
] | I'm using MonetDB 11.41.5 and I'm trying to bulk upload a very large CSV file (70 Gb) to a table using COPY INTO.
After a few minutes, I get the following message:

The complete COPY INTO sentence is:
`COPY OFFSET 2 INTO brasil.staging_rm_fact FROM '/opt/data/import/Brasil/202108/stagings/staging_rm_fact.csv' USING DELIMITERS ';','\n','"' NULL AS '';
`
- Disk space is enough to ensure the success of the operation (more than 600 GB of free space).
- The problem doesn't happen using previous version of MonetDB (11.39.11). Works perfectly in exactly the same conditions: same server, same free disk space and same large CSV file.
CSV file has 225 million lines. In order to help reproduce the error, I attached a reduced version with 100,000 sample lines of the same file. You can repeat the lines to create a 225-million-line file. Please keep the header as the first line in the CSV file.
[sample.zip](https://github.com/MonetDB/MonetDB/files/7784921/sample.zip)
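To rebuild a large input from the sample without hand-editing, the repetition step can be scripted. A sketch (file names below are placeholders) that keeps the header as the first line:

```python
import os
import tempfile


def expand_csv(src, dst, copies):
    # Read the header once, then repeat the data lines `copies` times.
    # For a 100k-line sample this fits in memory; for truly huge inputs
    # the data part would be streamed in chunks instead.
    with open(src) as f:
        header = f.readline()
        data = f.read()
    with open(dst, "w") as out:
        out.write(header)  # the header appears exactly once, at the top
        for _ in range(copies):
            out.write(data)


# Quick self-check on a tiny stand-in sample.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "sample.csv")
dst = os.path.join(tmp, "big.csv")
with open(src, "w") as f:
    f.write("h1;h2\n1;2\n3;4\n")
expand_csv(src, dst, 3)
with open(dst) as f:
    lines = f.readlines()
```

The output file keeps one header line followed by the repeated data lines (here 1 + 3 × 2 = 7 lines).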
The target table is created with this sentence:
[table.zip](https://github.com/MonetDB/MonetDB/files/7784934/table.zip)
If you need more information, please feel free to ask me. I'm going to try to get some debug information, as you suggested on Stack Overflow, using:
```
CALL logging.setcomplevel('HEAP', 'DEBUG');
CALL logging.setflushlevel('DEBUG');
``` | "Failed to extend the BAT" error in MonetDB 11.41.5 | https://api.github.com/repos/MonetDB/MonetDB/issues/7213/comments | 1 | 2021-12-28T13:53:03Z | 2024-06-07T11:59:17Z | https://github.com/MonetDB/MonetDB/issues/7213 | 1,089,911,324 | 7,213 |
[
"MonetDB",
"MonetDB"
] | **Is your feature request related to a problem? Please describe.**
It excludes the stop value, which differs from many other DBMSs, such as SQLite, PostgreSQL, and DuckDB.
```sql
sql>select * from generate_series(1,8);
+-------+
| value |
+=======+
| 1 |
| 2 |
| 3 |
| 4 |
| 5 |
| 6 |
| 7 |
+-------+
7 tuples
```
**Describe the solution you'd like**
```sql
sql>select * from generate_series(1,8);
+-------+
| value |
+=======+
| 1 |
| 2 |
| 3 |
| 4 |
| 5 |
| 6 |
| 7 |
| 8 |
+-------+
8 tuples
``` | the generate_series(begin,stop) results a series including the stop value | https://api.github.com/repos/MonetDB/MonetDB/issues/7212/comments | 2 | 2021-12-23T13:31:06Z | 2024-06-27T13:16:44Z | https://github.com/MonetDB/MonetDB/issues/7212 | 1,087,720,209 | 7,212 |
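The inclusive semantics requested in the issue above can be sketched in plain Python (illustrative only; a positive integer step is assumed):

```python
def generate_series(begin, stop, step=1):
    # Include the stop value, matching SQLite/PostgreSQL/DuckDB behavior.
    return list(range(begin, stop + 1, step))


print(generate_series(1, 8))  # [1, 2, 3, 4, 5, 6, 7, 8]
```

When the step does not land exactly on the stop value, the series simply ends at the last value not exceeding it, e.g. `generate_series(1, 8, 3)` gives `[1, 4, 7]`.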
[
"MonetDB",
"MonetDB"
Sometimes MonetDB gets stuck on the same error, e.g. when a database can't be started while the application keeps trying. The merovingian.log and tracer log get polluted by the same error message repeated a huge number of times. This can push earlier, more useful messages out of the log.
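The requested suppression could follow the classic syslog "last message repeated N times" pattern; a generic sketch (not MonetDB code):

```python
class DedupLog:
    """Collapse runs of identical messages into one line plus a repeat note."""

    def __init__(self, sink):
        self.sink = sink
        self.last = None
        self.count = 0

    def write(self, msg):
        if msg == self.last:
            self.count += 1
            return
        self.flush()
        self.last = msg
        self.count = 1
        self.sink.append(msg)

    def flush(self):
        # Emit a summary line for any suppressed repeats of the last message.
        if self.count > 1:
            self.sink.append(f"last message repeated {self.count - 1} more times")
        self.count = 0


out = []
log = DedupLog(out)
for m in ["start", "db failed", "db failed", "db failed", "ok"]:
    log.write(m)
log.flush()
```

Three consecutive "db failed" lines collapse into the message itself plus one "repeated 2 more times" summary, so earlier entries are no longer pushed out of a size-limited log.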
Would be nice to be able to ignore such large repeat of the same error message. | Don't repeatedly log the same errors | https://api.github.com/repos/MonetDB/MonetDB/issues/7211/comments | 1 | 2021-12-10T18:11:18Z | 2023-09-18T07:53:28Z | https://github.com/MonetDB/MonetDB/issues/7211 | 1,077,106,610 | 7,211 |
[
"MonetDB",
"MonetDB"
] | Configure the tracer to log messages from different levels to different files. In this way, during log rotation, one can more easily choose to keep the more important messages for a longer time than, e.g., info. messages. | Tracer log different messages to different files | https://api.github.com/repos/MonetDB/MonetDB/issues/7210/comments | 1 | 2021-12-10T18:06:29Z | 2021-12-14T16:38:09Z | https://github.com/MonetDB/MonetDB/issues/7210 | 1,077,103,144 | 7,210 |
[
"MonetDB",
"MonetDB"
] | Would be nice to be able to configure what is being logged in the merovingian.log. In particular, a way to not log the large amount of connection information, because they can push away more useful (error) messages in log rotation. | Configuration option for merovingian.log | https://api.github.com/repos/MonetDB/MonetDB/issues/7209/comments | 5 | 2021-12-10T18:03:26Z | 2024-06-27T13:16:42Z | https://github.com/MonetDB/MonetDB/issues/7209 | 1,077,100,793 | 7,209 |
[
"MonetDB",
"MonetDB"
] | I'm retrieving the number of unique values using count distinct:
`select count(distinct "commodity_type_") from "bb243f60c9eDM_FULFile___out_"`
I can also perform the same calculation using grouping and sub-queries:
`select count(*) from (select "commodity_type_" from "bb243f60c9eDM_FULFile___out_" group by "commodity_type_") "x"`
The former takes **300ms**, the latter **30ms**.
And as part of a larger query, _count distinct_ turns a 300ms query into 7000ms.
It seems that MonetDB is capable of performing this calculation fast, but the count distinct form is not optimised.
Is this a bug? If not, are there any quick wins here?
Oct 2020 version.
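For reference, the two formulations are logically equivalent; a plain-Python analogue (illustrative only, not MonetDB internals):

```python
values = ["a", "b", "a", "c", "b"]

# count(distinct x): deduplicate, then count.
n_distinct = len(set(values))

# count(*) over (select x ... group by x): group first, then count the groups.
groups = {}
for v in values:
    groups[v] = groups.get(v, 0) + 1
n_groups = len(groups)
```

Both paths compute the same number (3 here); the report above is that MonetDB chooses a much slower plan for the first formulation.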
```
27 X_1=0@0:void := querylog.define("trace select count(distinct \"commodity_type_\") from \"bb243f60c9eDM_FULFile___out_\"\n;":str, "default_pipe":str, -1805369535:int);
1 name="monetdb":str := clients.getUsername();
4 start="2021-12-02 12:03:33.676660":timestamp := mtime.current_timestamp();
1763 querylog.append("trace select count(distinct \"commodity_type_\") from \"bb243f60c9eDM_FULFile___out_\"\n;":str, "default_pipe":str, name="monetdb":str, start="2021-12-02 12:03:33.676660":timestamp);
24 args="function user.main():void;":str := sql.argRecord();
1 xtime=1638446613678484:lng := alarm.usec();
7 (user=0:lng, nice=0:lng, sys=0:lng, idle=0:lng, iowait=0:lng) := profiler.cpustats();
0 tuples=1:lng := 1:lng;
18 X_4=0:int := sql.mvc();
28 C_5=[6413587]:bat[:oid] := sql.tid(X_4=0:int, "sys":str, "bb243f60c9eDM_FULFile___out_":str);
19 X_8=[6413587]:bat[:str] := sql.bind(X_4=0:int, "sys":str, "bb243f60c9eDM_FULFile___out_":str, "commodity_type_":str, 0:int);
24 X_17=[6413587]:bat[:str] := algebra.projection(C_5=[6413587]:bat[:oid], X_8=[6413587]:bat[:str]);
291592 C_18=[3]:bat[:oid] := algebra.unique(X_17=[6413587]:bat[:str]); # unique: new partial hash
40 X_19=[3]:bat[:str] := algebra.projection(C_18=[3]:bat[:oid], X_17=[6413587]:bat[:str]); # project_bte
26 language.pass(X_17=[6413587]:bat[:str]);
25 X_20=3:lng := aggr.count(X_19=[3]:bat[:str], true:bit);
291997 barrier X_77=false:bit := language.dataflow();
92 sql.resultSet("sys.%1":str, "%1":str, "bigint":str, 64:int, 0:int, 7:int, X_20=3:lng);
0 X_100=1638446613970629:lng := alarm.usec();
2 xtime=292145:lng := calc.-(X_100=1638446613970629:lng, xtime=292145:lng);
0 rtime=1638446613970640:lng := alarm.usec();
0 X_104=1638446613970642:lng := alarm.usec();
0 rtime=2:lng := calc.-(X_104=1638446613970642:lng, rtime=2:lng);
3 finish="2021-12-02 12:03:33.970648":timestamp := mtime.current_timestamp();
9 (load=0:int, io=0:int) := profiler.cpuload(user=0:lng, nice=0:lng, sys=0:lng, idle=0:lng, iowait=0:lng);
1691 querylog.call(start="2021-12-02 12:03:33.676660":timestamp, finish="2021-12-02 12:03:33.970648":timestamp, args="function user.main():void;":str, tuples=1:lng, xtime=292145:lng, rtime=2:lng, load=0:int, io=0:int);
```
vs
```
25 X_1=0@0:void := querylog.define("trace select count(*) from (select \"commodity_type_\" from \"bb243f60c9eDM_FULFile___out_\" group by \"commodity_type_\") \"x\"\n;":str, "default_pipe":str, -1780745219:int);
1 name="monetdb":str := clients.getUsername();
3 start="2021-12-02 12:03:58.300668":timestamp := mtime.current_timestamp();
6744 querylog.append("trace select count(*) from (select \"commodity_type_\" from \"bb243f60c9eDM_FULFile___out_\" group by \"commodity_type_\") \"x\"\n;":str, "default_pipe":str, name="monetdb":str, start="2021-12-02 12:03:58.300668":timestamp);
24 args="function user.main():void;":str := sql.argRecord();
1 xtime=1638446638307471:lng := alarm.usec();
7 (user=0:lng, nice=0:lng, sys=0:lng, idle=0:lng, iowait=0:lng) := profiler.cpustats();
1 tuples=1:lng := 1:lng;
19 X_4=0:int := sql.mvc();
34 C_85=[801701]:bat[:oid] := sql.tid(X_4=0:int, "sys":str, "bb243f60c9eDM_FULFile___out_":str, 7:int, 8:int);
36 X_92=[801698]:bat[:str] := sql.bind(X_4=0:int, "sys":str, "bb243f60c9eDM_FULFile___out_":str, "commodity_type_":str, 0:int, 6:int, 8:int);
34 C_83=[801698]:bat[:oid] := sql.tid(X_4=0:int, "sys":str, "bb243f60c9eDM_FULFile___out_":str, 6:int, 8:int);
33 X_91=[801698]:bat[:str] := sql.bind(X_4=0:int, "sys":str, "bb243f60c9eDM_FULFile___out_":str, "commodity_type_":str, 0:int, 5:int, 8:int);
37 C_81=[801698]:bat[:oid] := sql.tid(X_4=0:int, "sys":str, "bb243f60c9eDM_FULFile___out_":str, 5:int, 8:int);
58 C_71=[801698]:bat[:oid] := sql.tid(X_4=0:int, "sys":str, "bb243f60c9eDM_FULFile___out_":str, 0:int, 8:int);
27 X_86=[801698]:bat[:str] := sql.bind(X_4=0:int, "sys":str, "bb243f60c9eDM_FULFile___out_":str, "commodity_type_":str, 0:int, 0:int, 8:int);
28 C_73=[801698]:bat[:oid] := sql.tid(X_4=0:int, "sys":str, "bb243f60c9eDM_FULFile___out_":str, 1:int, 8:int);
25 X_90=[801698]:bat[:str] := sql.bind(X_4=0:int, "sys":str, "bb243f60c9eDM_FULFile___out_":str, "commodity_type_":str, 0:int, 4:int, 8:int);
27 X_118=[801698]:bat[:str] := algebra.projection(C_83=[801698]:bat[:oid], X_92=[801698]:bat[:str]);
28 X_89=[801698]:bat[:str] := sql.bind(X_4=0:int, "sys":str, "bb243f60c9eDM_FULFile___out_":str, "commodity_type_":str, 0:int, 3:int, 8:int);
24 C_77=[801698]:bat[:oid] := sql.tid(X_4=0:int, "sys":str, "bb243f60c9eDM_FULFile___out_":str, 3:int, 8:int);
26 X_87=[801698]:bat[:str] := sql.bind(X_4=0:int, "sys":str, "bb243f60c9eDM_FULFile___out_":str, "commodity_type_":str, 0:int, 1:int, 8:int);
25 X_117=[801698]:bat[:str] := algebra.projection(C_81=[801698]:bat[:oid], X_91=[801698]:bat[:str]);
26 X_88=[801698]:bat[:str] := sql.bind(X_4=0:int, "sys":str, "bb243f60c9eDM_FULFile___out_":str, "commodity_type_":str, 0:int, 2:int, 8:int);
26 X_112=[801698]:bat[:str] := algebra.projection(C_71=[801698]:bat[:oid], X_86=[801698]:bat[:str]);
24 C_75=[801698]:bat[:oid] := sql.tid(X_4=0:int, "sys":str, "bb243f60c9eDM_FULFile___out_":str, 2:int, 8:int);
22 C_79=[801698]:bat[:oid] := sql.tid(X_4=0:int, "sys":str, "bb243f60c9eDM_FULFile___out_":str, 4:int, 8:int);
26 X_115=[801698]:bat[:str] := algebra.projection(C_77=[801698]:bat[:oid], X_89=[801698]:bat[:str]);
26 X_113=[801698]:bat[:str] := algebra.projection(C_73=[801698]:bat[:oid], X_87=[801698]:bat[:str]);
27 X_114=[801698]:bat[:str] := algebra.projection(C_75=[801698]:bat[:oid], X_88=[801698]:bat[:str]);
23 X_93=[801701]:bat[:str] := sql.bind(X_4=0:int, "sys":str, "bb243f60c9eDM_FULFile___out_":str, "commodity_type_":str, 0:int, 7:int, 8:int);
45 X_116=[801698]:bat[:str] := algebra.projection(C_79=[801698]:bat[:oid], X_90=[801698]:bat[:str]);
25 X_119=[801701]:bat[:str] := algebra.projection(C_85=[801701]:bat[:oid], X_93=[801701]:bat[:str]);
5716 (X_139=[801698]:bat[:oid], C_140=[3]:bat[:oid]) := group.groupdone(X_116=[801698]:bat[:str]);
5986 (X_127=[801698]:bat[:oid], C_128=[3]:bat[:oid]) := group.groupdone(X_113=[801698]:bat[:str]);
5954 (X_151=[801701]:bat[:oid], C_152=[3]:bat[:oid]) := group.groupdone(X_119=[801701]:bat[:str]);
6428 (X_123=[801698]:bat[:oid], C_124=[3]:bat[:oid]) := group.groupdone(X_112=[801698]:bat[:str]);
6462 (X_143=[801698]:bat[:oid], C_144=[3]:bat[:oid]) := group.groupdone(X_117=[801698]:bat[:str]);
6518 (X_147=[801698]:bat[:oid], C_148=[3]:bat[:oid]) := group.groupdone(X_118=[801698]:bat[:str]);
6467 (X_135=[801698]:bat[:oid], C_136=[3]:bat[:oid]) := group.groupdone(X_115=[801698]:bat[:str]);
1042 X_142=[3]:bat[:str] := algebra.projection(C_140=[3]:bat[:oid], X_116=[801698]:bat[:str]); # project_bte
121 X_154=[3]:bat[:str] := algebra.projection(C_152=[3]:bat[:oid], X_119=[801701]:bat[:str]); # project_bte
947 X_130=[3]:bat[:str] := algebra.projection(C_128=[3]:bat[:oid], X_113=[801698]:bat[:str]); # project_bte
87 language.pass(X_116=[801698]:bat[:str]);
29 language.pass(X_113=[801698]:bat[:str]);
40 language.pass(X_119=[801701]:bat[:str]);
29 X_138=[3]:bat[:str] := algebra.projection(C_136=[3]:bat[:oid], X_115=[801698]:bat[:str]); # project_bte
35 X_150=[3]:bat[:str] := algebra.projection(C_148=[3]:bat[:oid], X_118=[801698]:bat[:str]); # project_bte
33 X_146=[3]:bat[:str] := algebra.projection(C_144=[3]:bat[:oid], X_117=[801698]:bat[:str]); # project_bte
41 X_126=[3]:bat[:str] := algebra.projection(C_124=[3]:bat[:oid], X_112=[801698]:bat[:str]); # project_bte
20 language.pass(X_118=[801698]:bat[:str]);
24 language.pass(X_115=[801698]:bat[:str]);
21 language.pass(X_117=[801698]:bat[:str]);
20 language.pass(X_112=[801698]:bat[:str]);
26 X_173=[3]:bat[:str] := mat.packIncrement(X_126=[3]:bat[:str], 8:int);
22 X_175=[6]:bat[:str] := mat.packIncrement(X_173=[6]:bat[:str], X_130=[3]:bat[:str]);
8600 (X_131=[801698]:bat[:oid], C_132=[3]:bat[:oid]) := group.groupdone(X_114=[801698]:bat[:str]);
22 X_134=[3]:bat[:str] := algebra.projection(C_132=[3]:bat[:oid], X_114=[801698]:bat[:str]); # project_bte
24 language.pass(X_114=[801698]:bat[:str]);
22 X_176=[9]:bat[:str] := mat.packIncrement(X_175=[9]:bat[:str], X_134=[3]:bat[:str]);
18 X_177=[12]:bat[:str] := mat.packIncrement(X_176=[12]:bat[:str], X_138=[3]:bat[:str]);
16 X_178=[15]:bat[:str] := mat.packIncrement(X_177=[15]:bat[:str], X_142=[3]:bat[:str]);
16 X_179=[18]:bat[:str] := mat.packIncrement(X_178=[18]:bat[:str], X_146=[3]:bat[:str]);
15 X_180=[21]:bat[:str] := mat.packIncrement(X_179=[21]:bat[:str], X_150=[3]:bat[:str]);
16 X_17=[24]:bat[:str] := mat.packIncrement(X_180=[24]:bat[:str], X_154=[3]:bat[:str]);
45 (X_18=[24]:bat[:oid], C_19=[3]:bat[:oid]) := group.groupdone(X_17=[24]:bat[:str]);
20 X_21=[3]:bat[:str] := algebra.projection(C_19=[3]:bat[:oid], X_17=[24]:bat[:str]);
16 language.pass(X_17=[24]:bat[:str]);
18 X_22=3:lng := aggr.count(X_21=[3]:bat[:str]);
9871 barrier X_183=false:bit := language.dataflow();
59 sql.resultSet("sys.%1":str, "%1":str, "bigint":str, 64:int, 0:int, 7:int, X_22=3:lng);
1 X_214=1638446638317437:lng := alarm.usec();
2 xtime=9966:lng := calc.-(X_214=1638446638317437:lng, xtime=9966:lng);
0 rtime=1638446638317446:lng := alarm.usec();
0 X_218=1638446638317448:lng := alarm.usec();
0 rtime=2:lng := calc.-(X_218=1638446638317448:lng, rtime=2:lng);
3 finish="2021-12-02 12:03:58.317454":timestamp := mtime.current_timestamp();
7 (load=0:int, io=0:int) := profiler.cpuload(user=0:lng, nice=0:lng, sys=0:lng, idle=0:lng, iowait=0:lng);
1297 querylog.call(start="2021-12-02 12:03:58.300668":timestamp, finish="2021-12-02 12:03:58.317454":timestamp, args="function user.main():void;":str, tuples=1:lng, xtime=9966:lng, rtime=2:lng, load=0:int, io=0:int);
``` | Dramatic difference in performance for two similar queries | https://api.github.com/repos/MonetDB/MonetDB/issues/7208/comments | 2 | 2021-12-02T12:04:55Z | 2024-06-07T12:01:17Z | https://github.com/MonetDB/MonetDB/issues/7208 | 1,069,470,513 | 7,208 |
[
"MonetDB",
"MonetDB"
] | I know MonetDB is a self-indexing database but how does this actually work? I am benchmarking time-series database systems with data of the following format:
```
sql>select * from datapoints limit 5;
+----------------------------+------------+--------------------------+--------------------------+--------------------------+--------------------------+--------------------------+
| time | id_station | temperature | discharge | ph | oxygen | oxygen_saturation |
+============================+============+==========================+==========================+==========================+==========================+==========================+
| 2019-03-01 00:00:00.000000 | 0 | 407.052 | 0.954 | 7.79 | 12.14 | 12.14 |
| 2019-03-01 00:00:10.000000 | 0 | 407.052 | 0.954 | 7.79 | 12.13 | 12.13 |
+----------------------------+------------+--------------------------+--------------------------+--------------------------+--------------------------+--------------------------+
```
The fields of the data are the `time`, the `station_id`, and the other sensor readings.
I am creating a database with the `time` as a primary key, as follows:
```
CREATE TABLE datapoints (
time TIMESTAMP NOT NULL PRIMARY KEY,
id_station INTEGER,
temperature DOUBLE PRECISION ,
discharge DOUBLE PRECISION ,
pH DOUBLE PRECISION ,
oxygen DOUBLE PRECISION ,
oxygen_saturation DOUBLE PRECISION
);
```
I would also like to index the data on `id_station`, since most of the queries will use it to locate the data.
Should I create another index on the column `id_station`? The [documentation](https://www.monetdb.org/Documentation/SQLLanguage/DataDefinition/IndexDefinitions) says that MonetDB treats `CREATE INDEX` statements as suggestions, which are often ignored.
The following [article](https://dev.to/yugabyte/think-about-primary-key-indexes-before-anything-else-o5m) mentions that the order of columns used in the index is very important `(id_station, time)` vs `(time, id_station`).
Does MonetDB create compound indexes as well, or only per-column ones? What indexing strategy is being used? How should I determine the optimal indexing strategy for my use case?
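To illustrate why column order matters in the article's point (a generic ordered-index model, not MonetDB's internals):

```python
import bisect

# An ordered compound index is modeled as a sorted list of key tuples.
rows = [(station, t) for t in range(1000) for station in range(5)]

idx_station_time = sorted((s, t) for s, t in rows)  # (id_station, time)
idx_time_station = sorted((t, s) for s, t in rows)  # (time, id_station)

# "WHERE id_station = 3" is one contiguous range on (id_station, time) ...
lo = bisect.bisect_left(idx_station_time, (3,))
hi = bisect.bisect_left(idx_station_time, (4,))
matches = idx_station_time[lo:hi]

# ... but on (time, id_station) the matching entries are scattered, so the
# whole index has to be scanned to find them.
scattered = [e for e in idx_time_station if e[1] == 3]
```

Both lookups find the same 1000 rows, but only the index led by `id_station` answers the station filter with a single range scan.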
| Time Series Data Indexing for Performance Benchmarking | https://api.github.com/repos/MonetDB/MonetDB/issues/7207/comments | 1 | 2021-11-29T18:37:41Z | 2024-06-27T13:16:41Z | https://github.com/MonetDB/MonetDB/issues/7207 | 1,066,368,794 | 7,207 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
Python UDFs that return a table fail when they return an empty table defined as a dictionary of numpy arrays.
**To Reproduce**
This succeeds (1 row returned):
```
CREATE OR REPLACE function f()
returns table(s STRING, i INT)
LANGUAGE PYTHON {
result = dict()
result['s'] = numpy.array(["test"], dtype=object)
result['i'] = numpy.array([5], dtype=int)
return(result)
};
select * from f();
+------+------+
| s | i |
+======+======+
| test | 5 |
+------+------+
1 tuple
```
This fails (0 rows returned):
```
CREATE OR REPLACE function f()
returns table(s STRING, i INT)
LANGUAGE PYTHON {
result = dict()
result['s'] = numpy.array([], dtype=object)
result['i'] = numpy.array([], dtype=int)
return(result)
};
select * from f();
Error converting dict return value "s": An array of size 0 was returned, yet we expect a list of 1 columns. The result is invalid..
```
Note that returning a list of lists, `return([[],[]])`, succeeds.
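The expected conversion can be sketched without numpy: a dict of equal-length columns is a valid table even when that length is zero. This is a sketch of the desired behavior, not the driver's actual conversion code.

```python
def dict_to_table(d):
    # All columns must have the same length; zero is a legal length.
    lengths = {len(col) for col in d.values()}
    if len(lengths) > 1:
        raise ValueError("columns differ in length")
    n = lengths.pop() if lengths else 0
    cols = list(d.values())
    return [tuple(col[i] for col in cols) for i in range(n)]
```

Under this rule, `{'s': [], 'i': []}` converts to an empty table rather than raising the size-0 error reported above.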
**Expected behavior**
Empty table returned
**Software versions**
- MonetDB 11.39.18
- OS and version: Fedora 34
- Compiled from sources
| Python UDF fails when returning an empty table as a dictionary | https://api.github.com/repos/MonetDB/MonetDB/issues/7206/comments | 1 | 2021-11-25T16:07:16Z | 2024-06-27T13:16:40Z | https://github.com/MonetDB/MonetDB/issues/7206 | 1,063,756,431 | 7,206 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
When performing joins over nested queries (even simple ones), the performance varies drastically based on the number and position of the nested queries. For example, we can have a case where having all tables in the join as nested queries results in good performance but unnesting one of those tables increases the execution time in over 1000x.
**To Reproduce**
- Create a simple table and insert some data:
```sql
CREATE TABLE T (k int, v int);
INSERT INTO T VALUES (1, 1), (2, 2);
INSERT INTO T SELECT * FROM T;
INSERT INTO T SELECT * FROM T;
INSERT INTO T SELECT * FROM T;
INSERT INTO T SELECT * FROM T;
INSERT INTO T SELECT * FROM T;
INSERT INTO T SELECT * FROM T;
INSERT INTO T SELECT * FROM T;
INSERT INTO T SELECT * FROM T;
INSERT INTO T SELECT * FROM T;
-- 1024 rows
```
- Perform joins with different nested tables and check the varying performance:
```sql
-- all normal (~2ms)
SELECT 1
FROM t t1, t t2, t t3, t t4
WHERE t1.k = t2.k
AND t2.k = t3.k
AND t3.k = t4.k
AND t4.k = 3;
-- 3 and 4 nested (~6.5s)
SELECT 1
FROM t t1, t t2, (SELECT * FROM t) t3, (SELECT * FROM t) t4
WHERE t1.k = t2.k
AND t2.k = t3.k
AND t3.k = t4.k
AND t4.k = 3;
-- 2, 3, and 4 nested (~5ms)
SELECT 1
FROM t t1, (SELECT * FROM t) t2, (SELECT * FROM t) t3, (SELECT * FROM t) t4
WHERE t1.k = t2.k
AND t2.k = t3.k
AND t3.k = t4.k
AND t4.k = 3;
```
**Expected behavior**
Same (few ms) performance across all queries.
**Software versions**
- `monetdb -v`: `MonetDB Database Server Toolkit v11.41.11 (Jul2021-SP1)`.
- OS: `Ubuntu 20.04.3 LTS`;
- Monetdb installed from release packages with `apt` (packages `monetdb5-sql` and `monetdb-client`);
Can confirm that it also happens in the Jan22 branch.
**Additional context**
I tried to find a pattern for this behavior but had little success. I only noticed that the final table in the join must be nested for the problem to occur. I ran a small script to test all combinations with 4 joins and found this (black cells represent nested tables):

As far as I am aware this only happens with joins of size 4 and higher.
| Unpredictable performance when performing joins over nested queries | https://api.github.com/repos/MonetDB/MonetDB/issues/7205/comments | 4 | 2021-11-25T09:37:50Z | 2024-06-27T13:16:39Z | https://github.com/MonetDB/MonetDB/issues/7205 | 1,063,352,417 | 7,205 |
[
"MonetDB",
"MonetDB"
] | I am trying to evaluate the ability of MonetDB on performing continuous queries on time series data.
Continuous queries on time-series data are queries that run automatically and periodically on real-time data (and possibly store their results in a specified measurement). Some time-series database systems (TSDBs) have a built-in operation for this (e.g. InfluxDB, TimescaleDB).
I couldn't find a way to perform this type of query in MonetDB's documentation. An alternative would be to drive the queries from a separate program whose clock runs the given query automatically at a fixed interval. This is, however, not guaranteed to be fair: 1) to the system, because performance would then depend on an independent factor (the third-party programming language); and 2) to the other systems, because they provide an extra convenience feature that MonetDB does not.
My question is: is there an efficient way to perform continuous queries? If not, what alternatives do I have for performing continuous queries with MonetDB?
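A minimal external-scheduler sketch of the workaround described above. The query body is a stub; in practice it would execute SQL through a client library such as pymonetdb (an assumption, not shown here).

```python
import threading
import time


class ContinuousQuery:
    """Rerun a callable at a fixed interval until stopped."""

    def __init__(self, interval, fn):
        self.interval = interval
        self.fn = fn
        self._timer = None

    def _tick(self):
        self.fn()
        self.start()  # reschedule the next run

    def start(self):
        self._timer = threading.Timer(self.interval, self._tick)
        self._timer.daemon = True
        self._timer.start()

    def stop(self):
        if self._timer:
            self._timer.cancel()


runs = []
cq = ContinuousQuery(0.05, lambda: runs.append(time.time()))
cq.start()
time.sleep(0.2)
cq.stop()
```

As noted in the issue, this moves the scheduling burden outside the database, so its timing accuracy depends on the host program rather than on MonetDB itself.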
Thanks!
| Continuous Queries Support | https://api.github.com/repos/MonetDB/MonetDB/issues/7204/comments | 1 | 2021-11-24T12:37:44Z | 2021-12-01T13:18:21Z | https://github.com/MonetDB/MonetDB/issues/7204 | 1,062,377,649 | 7,204 |
[
"MonetDB",
"MonetDB"
] | We're using a JOIN query where the constraint is logically `LeftField=RightField`, but have nulls present and wish to match them as distinct values, so write `LeftField=RightField OR (LeftField IS NULL AND RightField IS NULL)`.
A particular query using this approach takes **30 seconds** to execute (6.5 million rows) on modern commodity hardware.
Taking away the NULL handling and using just `LeftField=RightField` brings it down to **800ms**.
Equally, so does using `COALESCE(LeftField, '')=COALESCE(RightField, '')` (although this is messy and assumes `''` is not present in the data).
These are text fields being matched, so we cannot explore IMPRINT indexes.
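For comparison, a NULL-safe hash join can be sketched in plain Python, where `None` models SQL NULL and a private sentinel (unlike `COALESCE(x, '')`) can never collide with real data:

```python
from collections import defaultdict

SENTINEL = object()  # cannot collide with any real value


def nullsafe_key(v):
    return SENTINEL if v is None else v


def nullsafe_join(left, right):
    # Hash join where NULL matches NULL, i.e. the semantics of
    # "a = b OR (a IS NULL AND b IS NULL)".
    index = defaultdict(list)
    for r in right:
        index[nullsafe_key(r)].append(r)
    return [(l, r) for l in left for r in index[nullsafe_key(l)]]


pairs = nullsafe_join([None, "a"], [None, "a", "b"])
```

The NULL-safe condition is still a single-pass equi-join under this keying, which is why the 30s-vs-800ms gap looks like a missed optimization rather than an inherent cost.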
I wonder, is this expected, or is something wrong here? If the latter I can prepare a test case.
Version 11.39.17 on Mac 11.6. | Severely degraded Join performance with IS NULL | https://api.github.com/repos/MonetDB/MonetDB/issues/7203/comments | 8 | 2021-11-24T09:43:37Z | 2021-11-25T09:43:11Z | https://github.com/MonetDB/MonetDB/issues/7203 | 1,062,205,662 | 7,203 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
Performing a DISTINCT returns duplicate rows if we sort by columns other than the ones being projected. This is not critical, as the additional columns can be removed from the sort; however, it can cause problems with dynamically generated queries (which is how this bug was discovered).
**To Reproduce**
- Create a table and populate with some data:
```sql
CREATE TABLE T (t1 int, t2 int);
INSERT INTO t VALUES (1, 1), (1, 2);
```
- DISTINCT with single sort column works as expected:
```sql
SELECT DISTINCT t1
FROM T
ORDER BY t1;
Returns:
+------+
| t1 |
+======+
| 1 |
+------+
```
- DISTINCT when sorting by both columns returns repeated rows:
```sql
SELECT DISTINCT t1
FROM T
ORDER BY t1, t2;
Returns:
+------+
| t1 |
+======+
| 1 |
| 1 |
+------+
```
**Expected behavior**
Return just one row with value 1.
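For reference, the expected semantics in plain Python (illustrative only): DISTINCT applies to the projected column, while ORDER BY may reference extra columns without reintroducing duplicates:

```python
rows = [(1, 1), (1, 2)]  # (t1, t2)

rows.sort(key=lambda r: (r[0], r[1]))  # ORDER BY t1, t2
seen = set()
result = []
for t1, _t2 in rows:  # SELECT DISTINCT t1
    if t1 not in seen:
        seen.add(t1)
        result.append(t1)
```

Deduplicating after the sort yields the single expected row, regardless of how many sort columns were used.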
**Software versions**
- `monetdb -v`: `MonetDB Database Server Toolkit v11.41.11 (Jul2021-SP1)`.
- OS: `Ubuntu 20.04.3 LTS`;
- Monetdb installed from release packages with `apt` (packages `monetdb5-sql` and `monetdb-client`); | DISTINCT does not work when sorting by additional columns | https://api.github.com/repos/MonetDB/MonetDB/issues/7202/comments | 3 | 2021-11-22T16:12:50Z | 2024-06-27T13:16:38Z | https://github.com/MonetDB/MonetDB/issues/7202 | 1,060,339,819 | 7,202 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
Performing certain selections on a subquery containing a LEFT JOIN causes the join to produce a wrong result. It takes a specific combination of filters, indexes, and `NOT NULL` properties for this to occur.
**To Reproduce**
Create two tables, naturally joined by two columns `x1, x2`, with the first table indexed by `x1, x2` and `x2`, and with `NOT NULL` properties:
```sql
CREATE TABLE T1 (x1 int NOT NULL, x2 int NOT NULL, y int NOT NULL);
CREATE INDEX T1_x1_x2 ON T1 (x1, x2);
CREATE INDEX T1_x2 ON T1 (x2);
CREATE TABLE T2 (x1 int NOT NULL, x2 int NOT NULL, z int NOT NULL);
CREATE INDEX T2_x1_x2 ON T2 (x1, x2);
```
Populate with some data:
```sql
INSERT INTO T1 VALUES (1, 0, 1), (1, 2, 1);
INSERT INTO T2 VALUES (1, 0, 3), (1, 2, 100);
```
Perform the subquery first to see that it is working correctly:
```sql
SELECT T1.*, T2.x1 as t2_x1, z
FROM T1
LEFT JOIN T2 ON T1.x1 = T2.x1 AND T1.x2 = T2.x2
WHERE 10 <= T2.z OR T2.z IS NULL; -- (x1, x2, z) = (1, 0, 3) is dropped here, as 10 <= 3 is False
Returns:
+------+------+------+-------+------+
| x1 | x2 | y | t2_x1 | z |
+======+======+======+=======+======+
| 1 | 2 | 1 | 1 | 100 |
+------+------+------+-------+------+
```
Perform the filter on the subquery to check that the join fails (`t2_x1` and `z` become null, and the other row is also returned):
```sql
SELECT *
FROM (
SELECT T1.*, T2.x1 as t2_x1, z
FROM T1
LEFT JOIN T2 ON T1.x1 = T2.x1 AND T1.x2 = T2.x2
WHERE 10 <= T2.z OR T2.z IS NULL
) T
WHERE T.x1 = 1; -- this filter should return the same row
Returns:
+------+------+------+-------+------+
| x1 | x2 | y | t2_x1 | z |
+======+======+======+=======+======+
| 1 | 0 | 1 | null | null |
| 1 | 2 | 1 | null | null |
+------+------+------+-------+------+
```
**Expected behavior**
Return the same result as the nested query.
**Software versions**
- `monetdb -v`: `MonetDB Database Server Toolkit v11.41.11 (Jul2021-SP1)`.
- OS: `Ubuntu 20.04.3 LTS`;
- Monetdb installed from release packages with `apt` (packages `monetdb5-sql` and `monetdb-client`);
**Additional context**
There are already some things I discovered that should help narrow the issue:
- Both indexes on T1 must be created, as without one of them the query works. The index on T2 is not relevant;
- The column x2 of T1 must be tagged with `NOT NULL`. The `NOT NULL` in the other columns is not relevant;
- There must be a filter after the subquery in one of the columns of T1 (`x1`, `x2`, or `y`). Filters done to `t2_x1` or `z` make the join succeed (e.g., `WHERE T.z = 100`). Likewise, the nested query alone with no filter also returns the correct result;
- Manually pushing the filter `WHERE T.x1 = 1` to the subquery works correctly;
- Removing the `T2.z IS NULL` from the subquery also makes the join succeed.
| Selection of a subquery with a LEFT JOIN returns the wrong result set | https://api.github.com/repos/MonetDB/MonetDB/issues/7201/comments | 2 | 2021-11-22T16:12:46Z | 2024-06-27T13:16:37Z | https://github.com/MonetDB/MonetDB/issues/7201 | 1,060,339,724 | 7,201 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
The unique constraint imposed by the PRIMARY KEY is violated when there are concurrent inserts, resulting in multiple rows with the same key.
**To Reproduce**
- Create a table with a primary key:
```sql
CREATE TABLE T (k int PRIMARY KEY, v int);
```
- Check that the unique constraint is working correctly:
```sql
INSERT INTO T VALUES (1, 1); -- ok
INSERT INTO T VALUES (1, 1); -- fails (INSERT INTO: PRIMARY KEY constraint 't.t_k_pkey' violated)
DELETE FROM T; -- reset table
```
- Test using bash + `mclient`:
- Create the `DOTMONETDBFILE`:
```bash
cat <<EOF >> .monetdb
user=monetdb
password=monetdb
EOF
```
  - Run concurrent inserts (make sure to run the second command immediately after the first, e.g., by pasting the entire input in the terminal):
```bash
for i in {1..1000}; do DOTMONETDBFILE=.monetdb mclient -d testdb -s "INSERT INTO T VALUES ($i, $i)"; done &
for i in {1..1000}; do DOTMONETDBFILE=.monetdb mclient -d testdb -s "INSERT INTO T VALUES ($i, $i)"; done
```
- Check the duplicate values in the table:
```sql
SELECT count(*) FROM T;
Returns (the actual value may vary):
+------+
| %1 |
+======+
| 1917 |
+------+
SELECT * FROM T LIMIT 10;
Returns:
+------+------+
| k | v |
+======+======+
| 1 | 1 |
| 1 | 1 |
| 2 | 2 |
| 2 | 2 |
| 3 | 3 |
| 3 | 3 |
| 4 | 4 |
| 4 | 4 |
| 5 | 5 |
| 5 | 5 |
+------+------+
```
- Running the script again will result in just rollbacks, so it only affects concurrent executions.
- Test using Python with `pyodbc`:
```python
import pyodbc
from multiprocessing import Pool
def insert(_):
    conn = pyodbc.connect(f"DRIVER={{/usr/lib/x86_64-linux-gnu/libMonetODBC.so}};HOST=localhost;PORT=50000;DATABASE=testdb;UID=monetdb;PWD=monetdb")
    conn.autocommit = True
    cursor = conn.cursor()
    for i in range(1000):
        try:
            cursor.execute(f'INSERT INTO T values(?, ?)', i, i)
        except:
            pass
    return True
pool = Pool(2)
pool.map(insert, range(2))
```
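Whichever client is used, the resulting duplicates can be detected on the client side. A minimal sketch over a fetched list of `(k, v)` rows (the sample rows below are illustrative):

```python
from collections import Counter

def duplicate_keys(rows):
    """Return the primary-key values that occur more than once."""
    counts = Counter(k for k, _ in rows)
    return {k for k, n in counts.items() if n > 1}

# Rows as SELECT * FROM T might return them after the concurrent run:
rows = [(1, 1), (1, 1), (2, 2), (2, 2), (3, 3)]
print(duplicate_keys(rows))  # {1, 2} -- should be empty if the constraint held
```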
**Expected behavior**
No rows with a duplicate primary key.
**Software versions**
- `monetdb -v`: `MonetDB Database Server Toolkit v11.41.11 (Jul2021-SP1)`.
- OS: `Ubuntu 20.04.3 LTS`;
- Monetdb installed from release packages with `apt` (packages `monetdb5-sql` and `monetdb-client`);
- `mclient -v`: `mclient, the MonetDB interactive terminal, version 11.41.11 (Jul2021-SP1)`
- ODBC driver installed with `apt` packages `unixodbc`, `unixodbc-dev`, and `libmonetdb-client-odbc`;
- `pyodbc==4.0.32`.
**Additional context**
The problem is visible with both `autocommit=True` and `autocommit=False`.
| PRIMARY KEY unique constraint is violated with concurrent inserts | https://api.github.com/repos/MonetDB/MonetDB/issues/7200/comments | 1 | 2021-11-22T16:12:31Z | 2024-06-27T13:16:37Z | https://github.com/MonetDB/MonetDB/issues/7200 | 1,060,339,451 | 7,200 |
[
"MonetDB",
"MonetDB"
] | I get very high disk write rates and numbers of I/O write operations after enabling query history (even when using a very high threshold so that no queries are logged to the history table), using:
```
call sys.querylog_enable(10000);
```
I have a constant rate of inserts to the DB (around 30 records/s). This generates around 1.5 MB/s of writes and about 40 I/O write operations per second to the main storage.
The moment I enabled the query history log (even with a high threshold) I get ~100MB/s write rate and ~1000 I/O write operations per second to the storage all the time.
This makes the feature not very useful in my use-case as it will have a significant impact on performance. I was also hoping to use it as a "slow query log" by giving a relatively high threshold value.
I had a look at the system call counts (with `strace`); with the log disabled, a typical 10-second workload looks like:
```
72012 clock_gettime
6063 poll
3852 futex
1074 write
802 read
728 rt_sigprocmask
364 clone
18 fdatasync
10 lseek
4 mremap
```
but with it enabled it looks like:
```
57100 clock_gettime
23442 write
23336 read
4923 poll
3318 futex
624 rt_sigprocmask
312 clone
42 rename
42 close
35 lseek
28 openat
28 open
28 munmap
28 mmap
28 getdents
28 fstat
27 fdatasync
14 unlink
14 rmdir
14 mkdir
14 fcntl
2 mremap
```
Looking at the opens and writes, what it does is:
```
openat(AT_FDCWD, "/var/lib/monetdb/data/default/logs/bat/DELETE_ME", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 30
getdents(30, /* 3 entries */, 32768) = 80
unlink("/var/lib/monetdb/data/default/logs/bat/DELETE_ME/BBP.dir") = 0
getdents(30, /* 0 entries */, 32768) = 0
close(30) = 0
rmdir("/var/lib/monetdb/data/default/logs/bat/DELETE_ME") = 0
clock_gettime(CLOCK_REALTIME, {1637340440, 451079009}) = 0
rename("/var/lib/monetdb/data/default/logs/bat/BBP.dir", "/var/lib/monetdb/data/default/logs/bat/BACKUP/BBP.dir") = 0
```
and
```
openat(AT_FDCWD, "/var/lib/monetdb/data/default/logs/bat/BACKUP/SUBCOMMIT", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = -1 ENOENT (No such file o
r directory)
mkdir("/var/lib/monetdb/data/default/logs/bat/BACKUP/SUBCOMMIT", 0777) = 0
clock_gettime(CLOCK_REALTIME, {1637340441, 481343818}) = 0
rename("/var/lib/monetdb/data/default/logs/bat/BACKUP/BBP.dir", "/var/lib/monetdb/data/default/logs/bat/BACKUP/SUBCOMMIT/BBP.dir") = 0
open("/var/lib/monetdb/data/default/logs/bat/BBP.dir", O_WRONLY|O_CREAT|O_CLOEXEC, 0666) = 30
fcntl(30, F_GETFL) = 0x8001 (flags O_WRONLY|O_LARGEFILE)
open("/var/lib/monetdb/data/default/logs/bat/BACKUP/SUBCOMMIT/BBP.dir", O_RDONLY) = 31
fstat(31, {st_mode=S_IFREG|0600, st_size=6583714, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f7f931b1000
read(31, "BBP.dir, GDKversion 25124\n8 8 16"..., 4096) = 4096
fstat(30, {st_mode=S_IFREG|0600, st_size=0, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f7f931b0000
read(31, "4096 0 9223372036854775808 92233"..., 4096) = 4096
write(30, "BBP.dir, GDKversion 25124\n8 8 16"..., 4096) = 4096
read(31, "0 1153 0 0 0 0 92233720368547758"..., 4096) = 4096
write(30, "4096 0 9223372036854775808 92233"..., 4096) = 4096
read(31, " 1048576 0 str 2 1 0 0 0 0 0 922"..., 4096) = 4096
write(30, "0 1153 0 0 0 0 92233720368547758"..., 4096) = 4096
read(31, "036854775808 11371 16384 0\n171 3"..., 4096) = 4096
write(30, " 1048576 0 str 2 1 0 0 0 0 0 922"..., 4096) = 4096
read(31, "362 40448 0 int 4 0 0 0 0 0 0 92"..., 4096) = 4096
write(30, "036854775808 11371 16384 0\n171 3"..., 4096) = 4096
read(31, "5808 0 2048 0 922337203685477580"..., 4096) = 4096
write(30, "362 40448 0 int 4 0 0 0 0 0 0 92"..., 4096) = 4096
read(31, "0 0 9223372036854775808 2 1024 0"..., 4096) = 4096
write(30, "5808 0 2048 0 922337203685477580"..., 4096) = 4096
read(31, "4 32 tmp_460 04/460 2 10 1024 0 "..., 4096) = 4096
write(30, "0 0 9223372036854775808 2 1024 0"..., 4096) = 4096
read(31, " 1490871 1572864 0 str 4 1 2048 "..., 4096) = 4096
write(30, "4 32 tmp_460 04/460 2 10 1024 0 "..., 4096) = 4096
```
and many more writes.
So this looks like it is re-writing (updating?) `bat/BBP.dir` about 3 times per second with the query log feature enabled.
That would explain the I/O load, as the size of the `BBP.dir` file is 6.3 MB on my system.
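The syscall-count difference between the two 10-second profiles above can be summarized with a small sketch (counts copied from the `strace` listings; only a subset of syscalls is included):

```python
disabled = {"clock_gettime": 72012, "poll": 6063, "futex": 3852,
            "write": 1074, "read": 802, "fdatasync": 18}
enabled = {"clock_gettime": 57100, "write": 23442, "read": 23336,
           "poll": 4923, "futex": 3318, "fdatasync": 27}

# Per-syscall growth after enabling the query log, largest first.
delta = {name: enabled.get(name, 0) - disabled.get(name, 0)
         for name in set(disabled) | set(enabled)}
top = sorted(delta.items(), key=lambda kv: kv[1], reverse=True)[:2]
print(top)  # read and write each grow by roughly 22000 calls per window
```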
This is on MonetDB Jul2021-SP1 release. | High disk write and I/O after enabling Query History | https://api.github.com/repos/MonetDB/MonetDB/issues/7199/comments | 1 | 2021-11-19T17:10:24Z | 2024-06-27T13:16:36Z | https://github.com/MonetDB/MonetDB/issues/7199 | 1,058,744,817 | 7,199 |
[
"MonetDB",
"MonetDB"
] | The query:
```
select
count(*) as count
from logs.http_access
where processed_timestamp >= '2021-11-07' and processed_timestamp < '2021-11-10'
and timestamp >= '2021-11-08 15:40:28.998' and timestamp <= '2021-11-08 15:55:28.999'
and logsource_environment = 'prod' and logsource_service = 'varnish' and request_url_path <> '/varnish_core_health.test'
and request_url_path <> '/varnish_health.test' and json.filter(varnish_client_log, '$.checkpoint_validated') <> '[]'
```
takes 3:40 minutes in my tests where on MonetDB Apr2019-SP1 it would take ~50ms.
If I remove one of the `<>` or predicates or change it to `=` the query would execute in ~50ms as expected:
```
select
count(*) as count
from logs.http_access
where processed_timestamp >= '2021-11-07' and processed_timestamp < '2021-11-10'
and timestamp >= '2021-11-08 15:40:28.998' and timestamp <= '2021-11-08 15:55:28.999'
and request_url_path <> '/varnish_core_health.test'
and json.filter(varnish_client_log, '$.checkpoint_validated') <> '[]'
```
Or
```
select
count(*) as count
from logs.http_access
where processed_timestamp >= '2021-11-07' and processed_timestamp < '2021-11-10'
and timestamp >= '2021-11-08 15:40:28.998' and timestamp <= '2021-11-08 15:55:28.999'
and request_url_path = '/varnish_core_health.test' and request_url_path <> '/varnish_health.test'
and json.filter(varnish_client_log, '$.checkpoint_validated') <> '[]'
```
Looks like in the bad case each partition is scanned with the JSON predicate:
```
-- C_35:bat[:oid] := algebra.select(X_8:bat[:timestamp], C_5:bat[:oid], "2021-11-07 00:00:00.000000":timestamp, "2021-11-10 00:00:00.000000":timestamp, true:bit, false:bit, false:bit, true:bit);
-- C_43:bat[:oid] := algebra.select(X_15:bat[:timestamp], C_35:bat[:oid], "2021-11-08 15:40:28.998000":timestamp, "2021-11-08 15:55:28.999000":timestamp, true:bit, true:bit, false:bit, true:bit);
-- C_52:bat[:oid] := algebra.thetaselect(X_46:bat[:json], C_43:bat[:oid], ""[]"":json, "!=":str);
-- C_56:bat[:oid] := algebra.thetaselect(X_20:bat[:str], C_52:bat[:oid], "/varnish_core_health.test":str, "!=":str);
-- C_59:bat[:oid] := algebra.thetaselect(X_20:bat[:str], C_56:bat[:oid], "/varnish_health.test":str, "!=":str);
```
While in the good case each partition is only scanned with the string comparisons and the JSON predicate is done on the concatenation of the results:
```
-- C_35:bat[:oid] := algebra.select(X_8:bat[:timestamp], C_5:bat[:oid], "2021-11-07 00:00:00.000000":timestamp, "2021-11-10 00:00:00.000000":timestamp, true:bit, false:bit, false:bit, true:bit);
-- C_43:bat[:oid] := algebra.select(X_15:bat[:timestamp], C_35:bat[:oid], "2021-11-08 15:40:28.998000":timestamp, "2021-11-08 15:55:28.999000":timestamp, true:bit, true:bit, false:bit, true:bit);
-- C_46:bat[:oid] := algebra.thetaselect(X_20:bat[:str], C_43:bat[:oid], "/varnish_core_health.test":str, "==":str);
-- C_50:bat[:oid] := algebra.thetaselect(X_20:bat[:str], C_46:bat[:oid], "/varnish_health.test":str, "==":str);
```
Later in the plan:
```
-- X_2296:bat[:json] := bat.append(X_2294:bat[:json], X_2232:bat[:json], true:bit);
-- X_2298:bat[:json] := bat.append(X_2296:bat[:json], X_2282:bat[:json], true:bit);
-- X_2301:bat[:json] := mal.manifold("json":str, "filter":str, X_2298:bat[:json], "$.checkpoint_validated":str);
-- C_2307:bat[:oid] := algebra.thetaselect(X_2301:bat[:json], nil:BAT, ""[]"":json, "!=":str);
```
Ideally all the plans would be doing `json.filter` on the concatenation of the partition filters, no matter what other string predicates are present.
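The cost impact of the predicate ordering can be illustrated with a hedged sketch (row counts and selectivities are made up; `expensive` stands in for `json.filter`, `cheap` for the string thetaselects):

```python
calls = {"cheap": 0, "expensive": 0}

def cheap(row):          # models a string thetaselect
    calls["cheap"] += 1
    return row % 100 == 0

def expensive(row):      # models json.filter
    calls["expensive"] += 1
    return True

partitions = [range(10_000), range(10_000)]

# Bad plan: the expensive predicate runs per partition, before the cheap ones.
for part in partitions:
    [r for r in part if expensive(r) and cheap(r)]
bad = calls["expensive"]

calls = {"cheap": 0, "expensive": 0}
# Good plan: cheap predicates per partition, expensive one on the concatenation.
survivors = [r for part in partitions for r in part if cheap(r)]
[r for r in survivors if expensive(r)]
good = calls["expensive"]

print(bad, good)  # 20000 vs 200 expensive-predicate evaluations
```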
This is on MonetDB Jul2021-SP1 and is a regression from Apr2019-SP1 on which the query was fast.
The `logs.http_access` is a merge table of daily partitions. | Suboptimal query plan for query containing JSON access filter and two negative string comparisons | https://api.github.com/repos/MonetDB/MonetDB/issues/7198/comments | 3 | 2021-11-19T12:31:45Z | 2024-06-27T13:16:35Z | https://github.com/MonetDB/MonetDB/issues/7198 | 1,058,480,256 | 7,198 |
[
"MonetDB",
"MonetDB"
] | The database prefers to convert column data to match the query filter type instead of the other way around for timestamps.
I filter the TIMESTAMP (SQL date and time) column `processed_timestamp` with a timestamp calculated using an `interval` expression. The resulting filter type is a UNIX timestamp and does not match the SQL date-and-time column type. The engine adds a cast from SQL date and time to UNIX timestamp for each column entry, which is very slow compared to just casting the UNIX-timestamp filter constant to SQL date and time.
```
-- took: 463ms
-- X_8:bat[:timestamp] := sql.bind(X_4:int, "logs":str, "http_access_past":str, "processed_timestamp":str, 0:int);
-- X_15:bat[:timestamp] := batcalc.timestamp(X_8:bat[:timestamp], C_5:bat[:oid], 7:int);
-- C_28:bat[:oid] := algebra.select(X_15:bat[:timestamp], "2021-11-10 14:48:19.393693":timestamp, "2021-11-10 14:54:59.393705":timestamp, true:bit, true:bit, false:bit, true:bit);
explain
select count(*)
from logs.http_access
where processed_timestamp BETWEEN now() - interval '110' second - interval '300' second
and now() - interval '10' second
```
Notice the `timestamp_2time_timestamp` calls during execution:
```
Overhead Shared Object Symbol
56.06% libmonetdbsql.so.11.41.11 [.] __divti3
21.65% libmonetdbsql.so.11.41.11 [.] timestamp_2time_timestamp
5.86% libbat.so.23.0.3 [.] timestamp_create
4.29% libbat.so.23.0.3 [.] timestamp_date
4.18% libbat.so.23.0.3 [.] timestamp_daytime
2.30% libmonetdbsql.so.11.41.11 [.] timestamp_date@plt
1.15% libmonetdbsql.so.11.41.11 [.] timestamp_create@plt
1.05% libbat.so.23.0.3 [.] BATordered
0.36% libmonetdbsql.so.11.41.11 [.] timestamp_daytime@plt
```
The workaround is to manually cast the result of the interval operation to TIMESTAMP, which makes my query run 10x faster:
```
-- took: 40 ms
-- X_8:bat[:timestamp] := sql.bind(X_4:int, "logs":str, "http_access_past":str, "processed_timestamp":str, 0:int);
-- C_29:bat[:oid] := algebra.select(X_8:bat[:timestamp], C_5:bat[:oid], "2021-11-19 10:42:12.604285":timestamp, "2021-11-19 10:48:52.604301":timestamp, true:bit, true:bit, false:bit, true:bit);
explain
select count(*)
from logs.http_access
where processed_timestamp BETWEEN CAST(now() - interval '110' second - interval '300' second AS TIMESTAMP)
and CAST(now() - interval '10' second AS TIMESTAMP)
```
Ideally the database would add the cast automatically for the constant that the table is filtered by instead of casting all the column data.
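The asymmetry can be illustrated with a hedged sketch (a toy numeric model, not MonetDB internals): converting every stored value to the filter's type costs one conversion per row, while converting the filter constants costs two conversions in total:

```python
conversions = 0

def convert(ts):
    """Stand-in for a per-value timestamp type conversion."""
    global conversions
    conversions += 1
    return ts  # identity in this toy model

column = list(range(100_000))   # stored timestamps
lo, hi = 100, 200               # filter constants of the "wrong" type

# Slow shape: convert every stored value before comparing.
hits_slow = sum(1 for v in column if lo <= convert(v) <= hi)
per_row = conversions

conversions = 0
# Fast shape: convert the two constants once, compare raw column values.
lo2, hi2 = convert(lo), convert(hi)
hits_fast = sum(1 for v in column if lo2 <= v <= hi2)
per_const = conversions

print(per_row, per_const, hits_slow == hits_fast)  # 100000 2 True
```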
This is on the MonetDB Jul2021-SP1 release.
I ended up rewriting my queries due to stricter SQL standard requirements introduced with the latest releases (I was using integer subtraction before), and I noticed this problem in the process. I have migrated from the Apr2019-SP1 release.
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
Heyo, I was trying to test some things to make our analytics stuff more stable. Basically I create a table and start a transaction on 2 clients; one alters the table to add a new column, and the other client inserts some data. Instead of rolling back, it seems to corrupt the table entirely.
I've read over the optimistic concurrency control documentation, so I'm not sure if this is *actually* a bug or expected behavior here.
**To Reproduce**
Client 1
```
sql>create table test (id bigint);
operation successful
sql>select * from test;
+----+
| id |
+====+
+----+
0 tuples
sql>start transaction;
auto commit mode: off
sql>alter table test add column data int; # To be clear this is run BEFORE the insert in the second client.
operation successful
sql>commit;
auto commit mode: on
sql>select * from test;
GDK reported error: BATproject2: does not match always
```
Client 2
```
sql>select * from test;
+----+
| id |
+====+
+----+
0 tuples
sql>start transaction;
auto commit mode: off
sql>insert into test values (1); # This is run AFTER the alter
1 affected row
sql>commit;
auto commit mode: on
sql>select * from test;
GDK reported error: BATproject2: does not match always
```
**Expected behavior**
I know this is a violation of the OCC rules on concurrent connections, but I wasn't expecting the table to be unrecoverable afterwards.
**Screenshots**
**Software versions**
MonetDB: MonetDB Database Server Toolkit v11.41.11 (Jul2021-SP1)
OS: Linux 839c464b9757 5.10.0-8-cloud-amd64 #1 SMP Debian 5.10.46-4 (2021-08-03) x86_64 x86_64 x86_64 GNU/Linux
Using the docker image here: https://hub.docker.com/r/monetdb/monetdb
**Issue labeling **
Table corruption, concurrent transactions,
**Additional context**
| BATproject2: does not match always | https://api.github.com/repos/MonetDB/MonetDB/issues/7196/comments | 6 | 2021-11-18T02:41:22Z | 2024-06-27T13:16:34Z | https://github.com/MonetDB/MonetDB/issues/7196 | 1,056,870,192 | 7,196 |
[
"MonetDB",
"MonetDB"
] | According to the MonetDB/NumPy documentation PDFs, a UDF is mappable only when it is declared with the PYTHON_MAP language; otherwise it is considered a black box. According to one of the examples, it is fine to have a scalar function like this:
```sql
CREATE FUNCTION python_min(i INTEGER) RETURNS integer LANGUAGE PYTHON {
    return numpy.min(i)
};
```
which returns a single value in the result and is expected not to be mappable.
However, we have encountered a case where this UDF runs in parallel and returns multiple rows in the output:
if we have a merge table with two partition tables and run the above UDF without defining it as PYTHON_MAP, it is still mapped over the partitions and returns 2 values.
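The behaviour can be modelled without MonetDB. Mapping a whole-column aggregate over the partitions yields one value per partition instead of one overall result (the builtin `min` stands in for `numpy.min` here):

```python
partitions = [[5, 3, 9], [1, 7]]   # a merge table with two partition tables

# What the non-mapped (black box) UDF should produce: one value overall.
correct = [min(v for part in partitions for v in part)]

# What happens when the engine maps it per partition anyway:
mapped = [min(part) for part in partitions]

print(correct)  # [1]
print(mapped)   # [3, 1] -- two rows where one was expected
```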
| Python UDFs are mapped even if they are not mappable. | https://api.github.com/repos/MonetDB/MonetDB/issues/7195/comments | 2 | 2021-11-10T11:27:41Z | 2023-09-13T14:55:09Z | https://github.com/MonetDB/MonetDB/issues/7195 | 1,049,708,059 | 7,195 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
Triggers don't work after server restart
**To Reproduce**
Just create a trigger on a table and restart the server.
**Expected behavior**
The trigger should run.
**Software versions**
- MonetDB version number: Jul2021
- OS and version: Linux
| Triggers don't work after server restart | https://api.github.com/repos/MonetDB/MonetDB/issues/7194/comments | 0 | 2021-11-05T09:35:40Z | 2021-11-05T09:43:38Z | https://github.com/MonetDB/MonetDB/issues/7194 | 1,045,629,009 | 7,194 |
[
"MonetDB",
"MonetDB"
] | I am trying to downsample and upsample time series data. Time series database systems (TSDS) usually offer an option to perform downsampling and upsampling with an operator like SAMPLE BY (1h).
Is there a possibility to downsample time series on MonetDB?
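In the absence of a SAMPLE BY operator, downsampling can be emulated with GROUP BY on a truncated timestamp, or on the client side. A plain-Python sketch of bucketing epoch seconds into 1-hour bins and averaging (illustrative only, not a MonetDB feature):

```python
def downsample(points, bucket_s=3600):
    """points: iterable of (epoch_seconds, value) -> {bucket_start: mean}."""
    acc = {}
    for ts, val in points:
        b = ts - ts % bucket_s            # truncate to the bucket start
        total, n = acc.get(b, (0.0, 0))
        acc[b] = (total + val, n + 1)
    return {b: total / n for b, (total, n) in acc.items()}

data = [(0, 1.0), (1800, 3.0), (3600, 10.0)]
print(downsample(data))  # {0: 2.0, 3600: 10.0}
```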
Thanks in advance! | Time Series Downsampling/Upsampling | https://api.github.com/repos/MonetDB/MonetDB/issues/7193/comments | 8 | 2021-11-04T21:51:55Z | 2022-08-18T18:39:04Z | https://github.com/MonetDB/MonetDB/issues/7193 | 1,045,244,282 | 7,193 |
[
"MonetDB",
"MonetDB"
] | Possibly after a failure due to insufficient disk space, a hash table seems to be in an inconsistent state.
A range select (using imprints) returns 9 results, while the point select (using hash) returns 0 results:
```
sql>\d cacheinfo
CREATE TABLE "spinque"."cacheinfo" (
"cacheid" INTEGER,
"attribute" CHARACTER LARGE OBJECT,
"value" CHARACTER LARGE OBJECT
);
sql>select * from sys.storage() where table='cacheinfo' and column='cacheid';
+---------+-----------+---------+------+----------+----------+--------+-----------+------------+----------+--------+-------+----------+--------+-----------+--------+----------+
| schema | table | column | type | mode | location | count | typewidth | columnsize | heapsize | hashes | phash | imprints | sorted | revsorted | unique | orderidx |
+=========+===========+=========+======+==========+==========+========+===========+============+==========+========+=======+==========+========+===========+========+==========+
| spinque | cacheinfo | cacheid | int | writable | 10/1055 | 101045 | 4 | 404180 | 0 | 463308 | false | 2868 | false | false | false | 0 |
+---------+-----------+---------+------+----------+----------+--------+-----------+------------+----------+--------+-------+----------+--------+-----------+--------+----------+
1 tuple
sql>select cacheid from cacheinfo where cacheid > 519 and cacheid < 521;
+---------+
| cacheid |
+=========+
| 520 |
| 520 |
| 520 |
| 520 |
| 520 |
| 520 |
| 520 |
| 520 |
| 520 |
+---------+
9 tuples
sql>select cacheid from cacheinfo where cacheid = 520;
+---------+
| cacheid |
+=========+
+---------+
0 tuples
```
According to `sys.storage()`, the hash is not persistent, so I tried to stop/start the database to see if it would drop it. Nothing changes.
I also tried to `analyze` the column, but nothing changes.
This is not easily reproducible, but also not urgent to me. I'm reporting it hoping that it can give a clue as to where a problem with hashing might be. It looks to me that it is not robust against a failure.
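The inconsistency can be sketched abstractly: a hash index that lost entries (e.g. after a failed write) makes the equality lookup disagree with a scan over the same column (a toy model, not GDK internals):

```python
column = [519, 520, 520, 521]

# Build a toy hash index, then simulate corruption by dropping one key.
index = {}
for pos, v in enumerate(column):
    index.setdefault(v, []).append(pos)
corrupt = dict(index)
corrupt.pop(520)                 # key lost from the hash after the failure

scan_hits = [pos for pos, v in enumerate(column) if 519 < v < 521]
hash_hits = corrupt.get(520, [])

print(scan_hits)  # [1, 2] -- the range select (imprints path) finds the rows
print(hash_hits)  # []     -- the equality select (hash path) finds nothing
```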
**Software versions**
- MonetDB version number v11.39.18
- OS and version: FC32
- self-installed and compiled
| Equality selection gives wrong result because of corrupt hash table | https://api.github.com/repos/MonetDB/MonetDB/issues/7192/comments | 0 | 2021-11-02T13:06:50Z | 2021-11-02T13:06:50Z | https://github.com/MonetDB/MonetDB/issues/7192 | 1,042,321,490 | 7,192 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
On MonetDBe, if a Prepared Statement is created with variable-sized types (such as String) as input and a NULL value is bound, closing the Prepared Statement (with `monetdbe_cleanup_statement()`) leads to:
`Assertion failed: (((char *) s)[i] == '\xBD'), function GDKfree`
**To Reproduce**
This example uses MonetDBe-Java:
```
Connection c = DriverManager.getConnection("jdbc:monetdb:memory:", new Properties());
Statement s = c.createStatement();
s.executeUpdate("CREATE TABLE p (i STRING)");
PreparedStatement ps = c.prepareStatement("INSERT INTO p VALUES (?);");
ps.setNull(1,Types.VARCHAR);
ps.execute();
c.close();
```
When the PreparedStatement ps is closed, the assertion failure is triggered. If we close the PreparedStatement with a non-NULL value bound to the string variable, the assertion failure does not happen.
**Expected behavior**
The PreparedStatement should be closed correctly, so `monetdbe_cleanup_statement()` should not throw this "Assertion failed".
**Software versions**
- MonetDB 5 server 11.42.0 (hg id: 7e9274bbe6de)
- Latest MonetDBe-Java version
- macOS Catalina (10.15.7)
| [MonetDBe] monetdbe_cleanup_statement() with bound NULLs on variable-sized types bug | https://api.github.com/repos/MonetDB/MonetDB/issues/7191/comments | 0 | 2021-11-01T15:37:19Z | 2024-06-27T13:16:32Z | https://github.com/MonetDB/MonetDB/issues/7191 | 1,041,309,161 | 7,191 |
[
"MonetDB",
"MonetDB"
] | **Is your feature request related to a problem? Please describe.**
Without an UPSERT clause, the same effect requires several queries, whereas UPSERT needs only one query, which can also be optimized.
**Describe the solution you'd like**
implementation of an UPSERT clause
**Describe alternatives you've considered**
writing several queries
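For comparison, SQLite already supports a single-statement form; a sketch using Python's bundled sqlite3 (the SQLite syntax is shown only as an illustration of what such a clause can look like, and requires SQLite >= 3.24):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (k INTEGER PRIMARY KEY, v INTEGER)")

upsert = "INSERT INTO t VALUES (?, ?) ON CONFLICT(k) DO UPDATE SET v = excluded.v"
con.execute(upsert, (1, 10))   # plain insert
con.execute(upsert, (1, 20))   # updates the existing row instead of failing

print(con.execute("SELECT k, v FROM t").fetchall())  # [(1, 20)]
```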
| Upsert clause support | https://api.github.com/repos/MonetDB/MonetDB/issues/7190/comments | 2 | 2021-10-23T16:56:50Z | 2022-06-18T09:54:59Z | https://github.com/MonetDB/MonetDB/issues/7190 | 1,034,209,513 | 7,190 |
[
"MonetDB",
"MonetDB"
] | When I start MonetDB, an error appears (my operating system is Windows 10 x64).
Below is the error message:
```
C:\Users\cathyma\Desktop\MonetDB5_new>M5server.bat --dbpath=wyndw --set mapi_port=50000 --set max_clients=200
MonetDB/Python Disabled: Python 3.9 installation not found.
# MonetDB 5 server v11.41.5 (Jul2021)
# Serving database 'demo', using 8 threads
# Compiled for amd64-pc-windows-msvc/64bit
# Found 15.692 GiB available main-memory of which we use 12.789 GiB
# Copyright (c) 1993 - July 2008 CWI.
# Copyright (c) August 2008 - 2021 MonetDB B.V., all rights reserved
# Visit https://www.monetdb.org/ for further information
#2021-10-22 12:07:23: main thread: createExceptionInternal: !ERROR: LoaderException:loadLibrary:Loading error failed to open library geom (from within file 'C:\Users\cathyma\Desktop\MonetDB5_new\lib\monetdb5\_geom.dll'): The specified module could not be found.
#2021-10-22 12:07:23: main thread: mal_init: !ERROR: LoaderException:loadLibrary:Loading error failed to open library geom (from within file 'C:\Users\cathyma\Desktop\MonetDB5_new\lib\monetdb5\_geom.dll'): The specified module could not be found.
```
| MonetDB startup error | https://api.github.com/repos/MonetDB/MonetDB/issues/7189/comments | 8 | 2021-10-22T04:15:42Z | 2021-10-27T07:08:19Z | https://github.com/MonetDB/MonetDB/issues/7189 | 1,033,160,882 | 7,189 |
[
"MonetDB",
"MonetDB"
] | Hi, when installing MonetDB on Ubuntu 18.04 following the instructions on [this page](https://www.monetdb.org/downloads/deb/), the error message shows that the certificate has expired:
> Err:10 https://dev.monetdb.org/downloads/deb bionic Release
Certificate verification failed: The certificate is NOT trusted. The certificate chain uses expired certificate. Could not handshake: Error in the certificate verification. [IP: 192.16.197.137 443]
Reading package lists... Done
E: The repository 'https://dev.monetdb.org/downloads/deb bionic Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
Did I miss something? | The certificate has expired? | https://api.github.com/repos/MonetDB/MonetDB/issues/7188/comments | 1 | 2021-10-18T03:53:30Z | 2021-10-19T06:35:31Z | https://github.com/MonetDB/MonetDB/issues/7188 | 1,028,591,170 | 7,188 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
As of Jul2021, the WAL is constantly rotated, so the functions to force a log flush have been removed in that release.
This page should be updated: https://www.monetdb.org/Documentation/SQLReference/SystemProcedures
| Outdated documentation of SystemProcedures | https://api.github.com/repos/MonetDB/MonetDB/issues/7187/comments | 1 | 2021-10-14T18:32:25Z | 2024-06-27T13:16:31Z | https://github.com/MonetDB/MonetDB/issues/7187 | 1,026,697,887 | 7,187 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
I expect that data exports created using COPY SELECT .. INTO 'file.csv' ...
would be usable to restore/copy the data again using COPY 'file.csv' INTO ..
However, for certain data (in this case all data from sys.tables) this fails when the delimiter is a pipe character '|' or a comma character ','.
When I look at the generated data files, they seem to be correct.
However, the import does not handle string data inside double-quoted fields correctly, as it fails on the || characters (in the .psv file) or the comma character (in the .csv file).
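The expected handling of a delimiter inside a quoted field can be cross-checked against Python's csv module, which parses this case correctly (toy input, not the actual export):

```python
import csv, io

# A quoted field containing the '|' delimiter, like the '||' in exported query texts.
raw = 'id|"a || b"|true\n'
rows = list(csv.reader(io.StringIO(raw), delimiter="|", quotechar='"'))
print(rows)  # [['id', 'a || b', 'true']] -- the quoted '||' is not split on
```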
**To Reproduce**
Start mserver5
-# builtin opt gdk_dbpath = /home/dinther/dev/INSTALL/var/monetdb5/dbfarm/demo
-# builtin opt mapi_port = 50000
-# builtin opt sql_optimizer = default_pipe
-# builtin opt sql_debug = 0
-# builtin opt raw_strings = false
-# cmdline opt embedded_r = true
-# cmdline opt embedded_py = 3
-# cmdline opt embedded_c = true
-# cmdline opt mapi_port = 41000
-# MonetDB 5 server v11.42.0 (hg id: 38bf565fc89a)
-# This is an unreleased version
-# Serving database 'demo', using 8 threads
-# Compiled for x86_64-pc-linux-gnu/64bit with 128bit integers
-# Found 31.233 GiB available main-memory of which we use 25.455 GiB
-# Copyright (c) 1993 - July 2008 CWI.
-# Copyright (c) August 2008 - 2021 MonetDB B.V., all rights reserved
-# Visit https://www.monetdb.org/ for further information
-# Listening for connection requests on mapi:monetdb://localhost:41000/
-# MonetDB/GIS module loaded
-# MonetDB/R module loaded
-# MonetDB/Python3 module loaded
-# MonetDB/SQL module loaded
Start mclient
Welcome to mclient, the MonetDB/SQL interactive terminal (unreleased)
Database: MonetDB v11.42.0 (hg id: 38bf565fc89a), 'demo'
FOLLOW US on https://twitter.com/MonetDB or https://github.com/MonetDB/MonetDB
Type \q to quit, \? for a list of available commands
auto commit mode: on
sql>CREATE TABLE t AS SELECT * FROM tables WITH NO DATA;
operation successful
sql>\d t
CREATE TABLE "sys"."t" (
"id" INTEGER,
"name" VARCHAR(1024),
"schema_id" INTEGER,
"query" VARCHAR(1048576),
"type" SMALLINT,
"system" BOOLEAN,
"commit_action" SMALLINT,
"access" SMALLINT,
"temporary" TINYINT
);
sql>select count(*) from t;
+------+
| %1 |
+======+
| 0 |
+------+
1 tuple
sql>COPY SELECT * FROM tables INTO 'csvfiles/tables.psv' ON CLIENT DELIMITERS '|';
sql>COPY SELECT * FROM tables INTO 'csvfiles/tables.csv' ON CLIENT DELIMITERS ',';
sql>COPY SELECT * FROM tables INTO 'csvfiles/tables.tsv' ON CLIENT DELIMITERS '\t';
sql>COPY INTO t FROM 'csvfiles/tables.psv' ON CLIENT DELIMITERS '|';
Failed to import table 't', line 105: column 11: Leftover data 'true|0|0|0'
sql>COPY INTO t FROM 'csvfiles/tables.psv' ON CLIENT DELIMITERS '|';
Failed to import table 't', line 85: column 11: Leftover data '\n sys.describe_type(c.type, c.type_digits, c.type_scale) ||\n ifthenelse(c.\"null\" = 'false', ' NOT NULL', '')\n , ', ') || ')'\n from sys._columns c\n where c.table_id = t.id) col,\n case ts.table_type_name\n when 'REMOTE TABLE' then\n sys.get_remote_table_expressions(s.name, t.name)\n when 'MERGE TABLE' then\n sys.get_merge_table_partition_expressions(t.id)\n when 'VIEW' then\n sys.schema_guard(s.name, t.name, t.query)\n else\n ''\n end opt\n from sys.schemas s, sys.table_types ts, sys.tables t\n where ts.table_type_name in ('TABLE', 'VIEW', 'MERGE TABLE', 'REMOTE TABLE', 'REPLICA TABLE')\n and t.system = false\n and s.id = t.schema_id\n and ts.table_type_id = t.type\n and s.name <> 'tmp';"|11|true|0|0|0'
sql>COPY INTO t FROM 'csvfiles/tables.psv' ON CLIENT DELIMITERS '|';
Failed to import table 't', line 43: column 11: Leftover data ' ' arg' as varchar(44)), 'sys.args', f.system from sys.args a join sys.functions f on a.func_id = f.id left outer join sys.function_types ft on f.type = ft.function_type_id union all\nselect id, name, schema_id, cast(null as int) as table_id, cast(null as varchar(124)) as table_name, 'sequence', 'sys.sequences', false from sys.sequences union all\nselect o.id, o.name, pt.schema_id, pt.id, pt.name, 'partition of merge table', 'sys.objects', false from sys.objects o join sys._tables pt on o.sub = pt.id join sys._tables mt on o.nr = mt.id where mt.type = 3 union all\nselect id, sqlname, schema_id, cast(null as int) as table_id, cast(null as varchar(124)) as table_name, 'type', 'sys.types', (sqlname in ('inet','json','url','uuid')) from sys.types where id > 2000\n order by id;"|11|true|0|0|0'
sql>COPY INTO t FROM 'csvfiles/tables.psv' ON CLIENT DELIMITERS '|';
Failed to import table 't', line 115: column 11: Leftover data ' sys.dq(col) || ' SET DEFAULT ' || def || ';' stmt,\n sch schema_name,\n tbl table_name,\n col column_name\n from sys.describe_column_defaults;"|11|true|0|0|0'
sql>select count(*) from t;
+------+
| %1 |
+======+
| 0 |
+------+
1 tuple
sql>
sql>COPY INTO t FROM 'csvfiles/tables.csv' ON CLIENT DELIMITERS ',';
Failed to import table 't', line 41: column 11: Leftover data ' ql.ship, ql.cpu, ql.io\nfrom sys.querylog_catalog() qd, sys.querylog_calls() ql\nwhere qd.id = ql.id and qd.owner = user;",11,true,0,0,0'
sql>COPY INTO t FROM 'csvfiles/tables.csv' ON CLIENT DELIMITERS ',';
Failed to import table 't', line 41: column 11: Leftover data ' ql.ship, ql.cpu, ql.io\nfrom sys.querylog_catalog() qd, sys.querylog_calls() ql\nwhere qd.id = ql.id and qd.owner = user;",11,true,0,0,0'
sql>COPY INTO t FROM 'csvfiles/tables.csv' ON CLIENT DELIMITERS ',';
Failed to import table 't', line 58: column 11: Leftover data ' c.name as column_name, dep.depend_type as depend_type\n from sys.tables as t, sys.columns as c, sys.triggers as tri, sys.dependencies as dep\n where dep.id = c.id and dep.depend_id = tri.id and c.table_id = t.id\n and dep.depend_type = 8\n order by t.schema_id, t.name, tri.name, c.name;",11,true,0,0,0'
sql>COPY INTO t FROM 'csvfiles/tables.csv' ON CLIENT DELIMITERS ',';
Failed to import table 't', line 22: column 11: Leftover data ' \"commit_action\", \"access\", CASE WHEN (NOT \"system\" AND \"commit_action\" > 0) THEN 1 ELSE 0 END AS \"temporary\" FROM \"sys\".\"_tables\" WHERE \"type\" <> 2 UNION ALL SELECT \"id\", \"name\", \"schema_id\", \"query\", CAST(\"type\" + 30 /* local temp table */ AS SMALLINT) AS \"type\", \"system\", \"commit_action\", \"access\", 1 AS \"temporary\" FROM \"tmp\".\"_tables\";",11,true,0,0,0'
sql>select count(*) from t;
+------+
| %1 |
+======+
| 0 |
+------+
1 tuple
sql>
sql>COPY INTO t FROM 'csvfiles/tables.tsv' ON CLIENT DELIMITERS '\t';
129 affected rows
sql>COPY INTO t FROM 'csvfiles/tables.tsv' ON CLIENT DELIMITERS '\t';
129 affected rows
sql>select count(*) from t;
+------+
| %1 |
+======+
| 258 |
+------+
1 tuple
sql>COPY INTO t FROM 'csvfiles/tables.tsv' ON CLIENT DELIMITERS '\t';
129 affected rows
sql>select count(*) from t;
+------+
| %1 |
+======+
| 387 |
+------+
1 tuple
sql>truncate t;
387 affected rows
sql>select count(*) from t;
+------+
| %1 |
+======+
| 0 |
+------+
1 tuple
sql>drop table t;
operation successful
sql>\d
sql>
**Expected behavior**
The import of the created files tables.psv and tables.csv should succeed without errors (as is the case with tables.tsv).
Also, the import should report the same result (or the same error) for a data file when it is executed multiple consecutive times; currently consecutive runs report errors for different lines.
It looks as if mserver5 processes the data file in a random order (or is this due to parallel processing of the data file?).
At least to the user it is not consistent and therefore confusing.
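For reference, the quoting behavior the reporter expects (a field delimiter inside a double-quoted value must not split the field) can be illustrated with Python's csv module; the row content below is made up and only mimics the quoted SQL texts in tables.psv:

```python
import csv
import io

# A field that contains the '|' delimiter inside its value, similar to the
# quoted query texts in tables.psv that trip up COPY INTO.
row = ["42", "select x from t where y = 'a|b';", "true"]

buf = io.StringIO()
csv.writer(buf, delimiter="|", quoting=csv.QUOTE_MINIMAL).writerow(row)
line = buf.getvalue()          # the second field comes out double-quoted

# Reading the line back recovers the original three fields, not four.
fields = next(csv.reader(io.StringIO(line), delimiter="|"))
print(fields)
```

A loader that honors the quoting should behave like the reader here; splitting on the bare delimiter would yield four fields.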
**Software versions**
I tested it using a compiled build from the default branch on Fedora.
However, this problem is probably also reproducible with the Jul2021 or Jul2021-SP1 release.
| data files created with COPY SELECT .. INTO 'file.csv' fail to be loaded using COPY INTO .. FROM 'file.csv' when double quoted string data contains the field values delimiter character | https://api.github.com/repos/MonetDB/MonetDB/issues/7186/comments | 4 | 2021-10-14T13:58:00Z | 2024-06-27T13:16:30Z | https://github.com/MonetDB/MonetDB/issues/7186 | 1,026,434,709 | 7,186 |
[
"MonetDB",
"MonetDB"
] | **To Reproduce**
```sql
create table students (course TEXT, type TEXT);
insert into students
(course, type)
values
('CS', 'Bachelor'),
('CS', 'Bachelor'),
('CS', 'PhD'),
('Math', 'Masters'),
('CS', NULL),
('CS', NULL),
('Math', NULL);
-- without aliases the query works as expected
select count(*), course, type
from students
group by grouping sets((course, type), (type))
order by 1, 2, 3;
-- with aliases no result is returned
select count(*), course AS crs, type AS tp
from students
group by grouping sets((crs, tp), (tp))
order by 1, 2, 3;
```
**Expected behavior**
Expect the following result from both queries:
```
+------+--------+----------+
| %1 | course | type |
+======+========+==========+
| 1 | null | Masters |
| 1 | null | PhD |
| 1 | CS | PhD |
| 1 | Math | null |
| 1 | Math | Masters |
| 2 | null | Bachelor |
| 2 | CS | null |
| 2 | CS | Bachelor |
| 3 | null | null |
+------+--------+----------+
```
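The expected result can also be reproduced outside the database; a small Python simulation of GROUPING SETS ((course, type), (type)) over the inserted rows (plain dictionary counting, nothing MonetDB-specific):

```python
from collections import Counter

rows = [("CS", "Bachelor"), ("CS", "Bachelor"), ("CS", "PhD"),
        ("Math", "Masters"), ("CS", None), ("CS", None), ("Math", None)]

# GROUPING SETS ((course, type), (type)) is the union of two groupings;
# the grouping column that is not part of a set is reported as NULL (None).
full = Counter(rows)                            # grouping set (course, type)
by_type = Counter((None, t) for _, t in rows)   # grouping set (type)

result = ([(n, c, t) for (c, t), n in full.items()]
          + [(n, c, t) for (c, t), n in by_type.items()])
```

This yields the 9 rows of the expected table (counts 1, 2 and 3), and aliasing the columns should not change it.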
**Software versions**
- MonetDB v11.41.5 (Jul2021)
- MacOS
- Homebrew
| GROUPING SETS on groups with aliases provided in the SELECT returns empty result | https://api.github.com/repos/MonetDB/MonetDB/issues/7185/comments | 1 | 2021-10-12T11:42:44Z | 2024-06-27T13:16:29Z | https://github.com/MonetDB/MonetDB/issues/7185 | 1,023,720,285 | 7,185 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
An INSERT INTO will block any other query executed afterwards, and the query uses only one CPU core.
**To Reproduce**
* Create a database and open mclient two times
* Create a table as: `CREATE TABLE IF NOT EXISTS alpha (id bigint PRIMARY KEY AUTO_INCREMENT, data bigint); `
* In the first mclient execute `INSERT INTO sys.alpha (data) select * from generate_series(0,100000000,cast (1 as bigint));` and **immediately** in the second mclient execute `SELECT 1;`
**Expected behavior**
Another query should be able to execute in parallel with an INSERT INTO query.
**Software versions**
- MonetDB Jul2021 v11.41.5
- OS and version: Ubuntu 20.04
| Insert into query blocks all other queries | https://api.github.com/repos/MonetDB/MonetDB/issues/7184/comments | 10 | 2021-10-12T07:40:50Z | 2024-06-27T13:16:28Z | https://github.com/MonetDB/MonetDB/issues/7184 | 1,023,474,328 | 7,184 |
[
"MonetDB",
"MonetDB"
] | I noticed that there are no .deb packages of version 11.41.x provided for Ubuntu 16.04. I'd like to know: do you have plans for this? Or is there an instruction that describes how to build the .deb packages of version 11.41.x for Ubuntu 16.04?
| Do you have any plans to build the .deb packages of version 11.41.x for Ubuntu 1604? | https://api.github.com/repos/MonetDB/MonetDB/issues/7183/comments | 10 | 2021-10-11T10:19:52Z | 2021-10-28T00:23:04Z | https://github.com/MonetDB/MonetDB/issues/7183 | 1,022,524,524 | 7,183 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
Queries against sys.querylog_catalog or sys.querylog_calls or sys.querylog_history fail with error:
HY013 Could not allocate space
on a demo db which was restored from a db created with
call sys.hot_snapshot(R'\path\file.tar');
Apparently the sys.hot_snapshot() functionality is not 100% complete: it misses some files, which causes server errors when the db is restored.
**To Reproduce**
start mserver5 (Jul2021-SP1) on windows
start mclient
in mclient do:
> call sys.hot_snapshot(R'C:\Users\myname\AppData\Roaming\MonetDB5\dbfarm\demo_bu2021_10_07.tar');
stop mclient
stop mserver5
In explorer go to C:\Users\myname\AppData\Roaming\MonetDB5\dbfarm\
rename original demo db folder to demo_orig
open demo_bu2021_10_07.tar file (with 7-Zip utility) and drag the demo folder to the dbfarm directory (to copy the demo dir and files to local HD).
Notice the difference in folder sizes (view properties on the folder) between
demo_orig (9,867,264 bytes 239 Files, 10 Folders)
and the restored db from the demo_bu2021_10_07.tar snapshot backup in the demo folder
demo (4,014,080 bytes 186 Files, 7 Folders).
So many files and folders were not included in the snapshot backup !!
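To quantify which files the snapshot misses, the tar's member list can be compared against the on-disk database folder. A generic Python sketch (the paths in the comment are from this report; that archive members are prefixed with the database name is an assumption):

```python
import os
import tarfile

def snapshot_missing(tar_path, dbfarm_dir, dbname):
    """Files present under dbfarm_dir/dbname but absent from the snapshot tar."""
    with tarfile.open(tar_path) as tar:
        in_tar = {m.name for m in tar.getmembers() if m.isfile()}
    on_disk = set()
    root = os.path.join(dbfarm_dir, dbname)
    for dirpath, _, filenames in os.walk(root):
        for f in filenames:
            rel = os.path.relpath(os.path.join(dirpath, f), dbfarm_dir)
            on_disk.add(rel.replace(os.sep, "/"))
    return sorted(on_disk - in_tar)

# e.g. snapshot_missing(r"...\dbfarm\demo_bu2021_10_07.tar", r"...\dbfarm", "demo")
```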
start mserver5 (Jul2021-SP1) on windows using demo db
start mclient
in mclient do:
SELECT COUNT(*) AS count FROM sys.querylog_catalog;
SELECT COUNT(*) AS count FROM sys.querylog_calls;
SELECT COUNT(*) AS count FROM sys.querylog_history;
It returns for each query error: HY013 Could not allocate space
stop mclient
start JdbcClient
in JdbcClient do:
\vsci
This special command will validate all the system catalog tables integrity.
It will list the same error for some 12 queries against these 3 system tables.
stop Jdbcclient
The mserver5 console shows:
#2021-10-07 17:41:55: client1: GDKextend: !ERROR: cannot open file C:\Users\myname\AppData\Roaming\MonetDB5\dbfarm\demo\bat\53\5303.tail: No such file or directory
#2021-10-07 17:41:55: client1: BBPrename: !ERROR: name is in use: 'querylog_cat_id'.
#2021-10-07 17:41:55: client1: GDKextend: !ERROR: cannot open file C:\Users\myname\AppData\Roaming\MonetDB5\dbfarm\demo\bat\36\3631.tail: No such file or directory
#2021-10-07 17:41:55: client1: BBPrename: !ERROR: name is in use: 'querylog_cat_defined'.
#2021-10-07 17:41:55: client1: GDKextend: !ERROR: cannot open file C:\Users\myname\AppData\Roaming\MonetDB5\dbfarm\demo\bat\23\2325.tail: No such file or directory
#2021-10-07 17:41:55: client1: BBPrename: !ERROR: name is in use: 'querylog_cat_mal'.
#2021-10-07 17:41:55: client1: GDKextend: !ERROR: cannot open file C:\Users\myname\AppData\Roaming\MonetDB5\dbfarm\demo\bat\27\2746.tail: No such file or directory
#2021-10-07 17:41:55: client1: BBPrename: !ERROR: name is in use: 'querylog_cat_optimize'.
#2021-10-07 17:41:55: client1: GDKextend: !ERROR: cannot open file C:\Users\myname\AppData\Roaming\MonetDB5\dbfarm\demo\bat\60\6062.tail: No such file or directory
#2021-10-07 17:41:55: client1: BBPrename: !ERROR: name is in use: 'querylog_calls_id'.
#2021-10-07 17:41:56: client1: GDKextend: !ERROR: cannot open file C:\Users\myname\AppData\Roaming\MonetDB5\dbfarm\demo\bat\05\513.tail: No such file or directory
#2021-10-07 17:41:56: client1: BBPrename: !ERROR: name is in use: 'querylog_calls_start'.
#2021-10-07 17:41:56: client1: GDKextend: !ERROR: cannot open file C:\Users\myname\AppData\Roaming\MonetDB5\dbfarm\demo\bat\34\3405.tail: No such file or directory
#2021-10-07 17:41:56: client1: BBPrename: !ERROR: name is in use: 'querylog_calls_stop'.
#2021-10-07 17:41:56: client1: GDKextend: !ERROR: cannot open file C:\Users\myname\AppData\Roaming\MonetDB5\dbfarm\demo\bat\46\4624.tail: No such file or directory
#2021-10-07 17:41:56: client1: BBPrename: !ERROR: name is in use: 'querylog_calls_tuples'.
#2021-10-07 17:41:56: client1: GDKextend: !ERROR: cannot open file C:\Users\myname\AppData\Roaming\MonetDB5\dbfarm\demo\bat\12\1272.tail: No such file or directory
#2021-10-07 17:41:56: client1: BBPrename: !ERROR: name is in use: 'querylog_calls_exec'.
#2021-10-07 17:41:56: client1: GDKextend: !ERROR: cannot open file C:\Users\myname\AppData\Roaming\MonetDB5\dbfarm\demo\bat\33\3360.tail: No such file or directory
#2021-10-07 17:41:56: client1: BBPrename: !ERROR: name is in use: 'querylog_calls_result'.
#2021-10-07 17:41:56: client1: GDKextend: !ERROR: cannot open file C:\Users\myname\AppData\Roaming\MonetDB5\dbfarm\demo\bat\31\3143.tail: No such file or directory
#2021-10-07 17:41:56: client1: BBPrename: !ERROR: name is in use: 'querylog_calls_cpuload'.
#2021-10-07 17:41:56: client1: GDKextend: !ERROR: cannot open file C:\Users\myname\AppData\Roaming\MonetDB5\dbfarm\demo\bat\32\3264.tail: No such file or directory
#2021-10-07 17:41:56: client1: BBPrename: !ERROR: name is in use: 'querylog_calls_iowait'.
#2021-10-07 17:41:56: client1: BBPrename: !ERROR: name is in use: '_'.
#2021-10-07 17:41:56: client1: BBPrename: !ERROR: name is in use: '_'.
#2021-10-07 17:41:56: client1: BBPrename: !ERROR: name is in use: '_'.
#2021-10-07 17:41:56: client1: BBPrename: !ERROR: name is in use: '_'.
#2021-10-07 17:41:56: client1: BBPrename: !ERROR: name is in use: '_'.
#2021-10-07 17:41:56: client1: createExceptionInternal: !ERROR: MALException:querylog.init:HY013!Could not allocate space
**Expected behavior**
The system tables should be queryable without errors
The tar file created using hot_snapshot() should include all necessary files and folders to correctly restore a database.
**Software versions**
# MonetDB 5 server v11.41.9 (Jul2021-SP1)
# Serving database 'demo', using 4 threads
# Compiled for amd64-pc-windows-msvc/32bit
# Found 3.240 GiB available main-memory of which we use 1.500 GiB
# Copyright (c) 1993 - July 2008 CWI.
# Copyright (c) August 2008 - 2021 MonetDB B.V., all rights reserved
# Visit https://www.monetdb.org/ for further information
# Listening for connection requests on mapi:monetdb://localhost:50000/
# MonetDB/GIS module loaded
# MonetDB/SQL module loaded
# MonetDB server is started. To stop server press Ctrl-C.
| Queries against sys.querylog_catalog, sys.querylog_calls or sys.querylog_history fail after restore of a db created using call sys.hot_snapshot(R'\path\file.tar'); | https://api.github.com/repos/MonetDB/MonetDB/issues/7182/comments | 1 | 2021-10-07T15:55:06Z | 2024-06-27T13:16:27Z | https://github.com/MonetDB/MonetDB/issues/7182 | 1,020,205,205 | 7,182 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
Our monetdb has been running for a very long time without issues, but suddenly we were not able to connect or log into the database. The logfile says:
```
merovingian.log:
2021-10-04 12:44:54 MSG merovingian[6487]: Merovingian 11.41.5 (Jul2021) starting
2021-10-04 12:44:54 MSG merovingian[6487]: monitoring dbfarm /datadrive/monetdbfarm
2021-10-04 12:44:54 MSG merovingian[6487]: accepting connections on TCP socket :50000
2021-10-04 12:44:54 MSG merovingian[6487]: accepting connections on UNIX domain socket /tmp/.s.monetdb.50000
2021-10-04 12:44:54 MSG discovery[6487]: listening for UDP messages on :50000
2021-10-04 12:44:54 MSG control[6487]: accepting connections on UNIX domain socket /tmp/.s.merovingian.50000
2021-10-04 12:44:54 MSG discovery[6487]: new neighbour myDBmachine (myDBmachine.com)
2021-10-04 12:44:56 MSG discovery[6487]: new database mapi:monetdb://myDBmachine:50000/devdb (ttl=660s)
2021-10-04 12:44:57 MSG merovingian[6487]: database 'devdb' has crashed after start on 2021-10-04 12:35:02, attempting restart, up min/avg/max: 29m/1w/12w, crash average: 1.00 0.80 0.93 (78-15=63)
2021-10-04 12:44:58 MSG devdb[6497]: arguments: /usr/bin/mserver5 --dbpath=/datadrive/monetdbfarm/devdb --set merovingian_uri=mapi:monetdb://myDBmachine:50000/devdb --set mapi_listenaddr=none --set mapi_usock=/datadrive/monetdbfarm/devdb/.mapi.sock --set monet_vault_key=/datadrive/monetdbfarm/devdb/.vaultkey --set gdk_nr_threads=8 --set max_clients=64 --set sql_optimizer=default_pipe
```
After running `sudo monetdbd start /datadrive/monetdbfarm`, mserver5 takes 100% CPU:
```
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6497 root 20 0 1839468 1.405g 47876 R 100.0 4.5 14:11.85 mserver5
```
Trying to log into the console using `sudo mclient -t performance -u aUsername -d devdb` results in waiting forever. No error messages...
The same happens for `sudo monetdb start devdb`, i.e. it waits forever and does not return to the prompt.
**To Reproduce**
Unable to provide information here.
**Expected behavior**
We expected to be able to login.
**Software versions**
Monetdb Jul2021: 11.41.5
Linux: ubuntu 18.04.6 , bionic, x86 64-bit
**Additional context**
The database was originally created using Oct2020 branch, and the issues happened also with the latest Oct2020. We did a sudo apt upgrade and a reboot of the linux machine. The monetdb was upgraded by the apt upgrade, and is hence now Jul2021.
We have several apps doing a lot of login/logout, and we wonder whether some login constraint has been reached or some login resource is locked. A reboot of the Linux machine does not solve the issue...
Any hints on what to look for are very much appreciated. | monetdb: unable to start | https://api.github.com/repos/MonetDB/MonetDB/issues/7181/comments | 9 | 2021-10-04T15:41:53Z | 2021-10-07T13:33:43Z | https://github.com/MonetDB/MonetDB/issues/7181 | 1,015,338,857 | 7,181 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
In a query that has a subquery with a GROUP BY clause in the FROM clause, the MonetDB server crashes when there is another correlated subquery in the SELECT clause.
**To Reproduce**
```
root@dd2a1e8e05d4:~# mclient testdb
Welcome to mclient, the MonetDB/SQL interactive terminal (unreleased)
Database: MonetDB v11.39.18 (hg id: d0fc592188), 'mapi:monetdb://dd2a1e8e05d4:50000/testdb'
FOLLOW US on https://twitter.com/MonetDB or https://github.com/MonetDB/MonetDB
Type \q to quit, \? for a list of available commands
auto commit mode: on
sql>CREATE TABLE table1 (a bigint);
operation successful
sql>CREATE TABLE table2 (a bigint);
operation successful
sql>SELECT (SELECT max(1) FROM table2 WHERE a=tmp.a) FROM (SELECT a FROM table1 GROUP BY a) tmp;
sql>
write error on stream
```
```
2021-09-29 16:43:03 ERR testdb[381]: mserver5: /home/niewerth/git/MonetDB/sql/server/sql_query.c:105: query_outer_used_exp: Assertion `(!is_sql_aggr(f) && sq->grouped == 0 && e->card != CARD_AGGR) || (!is_sql_aggr(f) && sq->grouped == 1 && e->card <= CARD_AGGR) || (is_sql_aggr(f) && !is_sql_farg(f) && !sq->grouped && e->card != CARD_AGGR) || (is_sql_aggr(f) && !is_sql_farg(f) && sq->grouped && e->card != CARD_AGGR) || (is_sql_aggr(f) && is_sql_farg(f) && sq->grouped && e->card <= CARD_AGGR) || (is_sql_aggr(f) && sq->grouped && e->card <= CARD_AGGR)' failed.
2021-09-29 16:43:07 MSG merovingian[34]: database 'testdb' (-1) has crashed with signal SIGABRT (dumped core)
```
**Expected behavior**
MonetDB should not crash.
**Software versions**
- MonetDB version number v11.39.18
- OS and version: Ubuntu 20.04
- self-compiled
| GROUP BY-subquery crashes MonetDb | https://api.github.com/repos/MonetDB/MonetDB/issues/7180/comments | 2 | 2021-09-29T14:53:17Z | 2024-06-27T13:16:25Z | https://github.com/MonetDB/MonetDB/issues/7180 | 1,011,067,152 | 7,180 |
[
"MonetDB",
"MonetDB"
] | I am inserting a batch of data by copy binary, data structure (timestamp).
This is my original data (screenshot omitted).
I use C#'s BinaryWriter to create the binary files; this is my conversion program:
```csharp
public class DateTimeWriter : IColumnWriter
{
    private const long _dt_to1970 = 352653098885316608;
    private const long _dt_secondModifier = 1000000L;
    private const long _dt_minuteModifier = 60000000L;
    private const long _dt_hourModifier = 3600000000L;
    private const long _dt_dayModifier = 137438953472L;     // 2^37
    private const long _dt_monthModifier = 4398046511104L;  // 32L * dayModifier
    private const long _dt_yearModifier = 52776558133248L;  // 12L * monthModifier

    public void Write(BinaryWriter binaryWriter, object value)
    {
        var dt = (DateTime)value; // cast added; the original snippet used 'dt' without declaring it
        binaryWriter.Write(ConvertDateTime(dt.Year, dt.Month, dt.Day, dt.Hour, dt.Minute, dt.Second, dt.Ticks));
    }

    private static long ConvertDateTime(int year, int month, int day, int hour, int minute, int second, long ticks)
    {
        return _dt_to1970
            + _dt_yearModifier * (year - 1970)
            + _dt_monthModifier * (month - 1)
            + _dt_dayModifier * (day - 1)
            + _dt_hourModifier * hour
            + _dt_minuteModifier * minute
            + _dt_secondModifier * second
            + ticks % 10000000L / 10; // sub-second part: .NET ticks (100 ns) to microseconds
    }
}
```
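As a sanity check of the conversion above, here is the same packing in Python; the constants are copied from the C# snippet, and whether they still match the server's internal timestamp layout in v11.41.5 is exactly the open question:

```python
# Constants copied verbatim from the C# writer above.
DT_TO1970 = 352653098885316608
SECOND = 1_000_000          # microseconds per second
MINUTE = 60 * SECOND
HOUR = 60 * MINUTE
DAY = 137_438_953_472       # 2**37
MONTH = 32 * DAY            # 4398046511104
YEAR = 12 * MONTH           # 52776558133248

def encode_timestamp(year, month, day, hour, minute, second, micros=0):
    """Mirror of ConvertDateTime(); micros replaces the .NET ticks term."""
    return (DT_TO1970
            + YEAR * (year - 1970)
            + MONTH * (month - 1)
            + DAY * (day - 1)
            + HOUR * hour
            + MINUTE * minute
            + SECOND * second
            + micros)

# By construction the Unix epoch encodes to the 1970 offset itself.
assert encode_timestamp(1970, 1, 1, 0, 0, 0) == DT_TO1970
```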
This is the converted binary file.
[BinaryData.zip](https://github.com/MonetDB/MonetDB/files/7208792/BinaryData.zip)
I use a SQL statement like this:
`copy binary into table1."7ba603337693449fb73883a19c087016" from ('C:\\BinaryData\\3df5dd91-1df3-4c6c-a372-b7fe2e57a041__rowid.bin', 'C:\\BinaryData\\3df5dd91-1df3-4c6c-a372-b7fe2e57a041_0.bin') no constraint;`
There is no problem executing this on version v11.37.11 (screenshot omitted).
But when I execute it in v11.41.5, I get the following error:
#2021-09-22 15:55:39: DFLOWworker10: createExceptionInternal: !ERROR: MALException:sql.importColumn:42000!inconsistent row count in C:\BinaryData\3df5dd91-1df3-4c6c-a372-b7fe2e57a041_0.bin: expected 4, got 2
Is there any change in the new version? | v11.37.11 and v11.41.5 are the same SQL execution but the results are different. | https://api.github.com/repos/MonetDB/MonetDB/issues/7179/comments | 5 | 2021-09-22T07:57:17Z | 2024-06-27T13:16:24Z | https://github.com/MonetDB/MonetDB/issues/7179 | 1,003,965,432 | 7,179 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
If we create a remote table with 500+ or 1000+ columns and then try `select * from "ABC"`, it throws the error given below.
#client80: createExceptionInternal: !ERROR: SQLException:RAstatement2:42000!The number of projections don't match between the generated plan and the expected one: 1 != 1000
Is there any limitation on the number of columns that can be defined for a remote table?
**To Reproduce**
Create a 1000-column table in the primary DB, create a remote table for it on the secondary DB, and run `select * from "ABC" limit 5`; it throws this error whether or not the table contains data.
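The wide table for the repro can be generated with a short script; the column names and types below are invented for illustration (the real table definition is not shown in the report):

```python
def wide_table_ddl(name, ncols):
    """CREATE TABLE statement with ncols integer columns c0..c{ncols-1}."""
    cols = ", ".join('"c%d" integer' % i for i in range(ncols))
    return 'CREATE TABLE "%s" (%s);' % (name, cols)

ddl = wide_table_ddl("ABC", 1000)
# For the remote side, the same column list would be reused in
# CREATE REMOTE TABLE ... ON 'mapi:monetdb://host:port/DBNAME' ...
```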
**Expected behavior**
It should return the data, or zero rows if no data are available.
**Software versions**
- MonetDB v11.41.5 (Jul2021)
- Cent OS 7 Core
- Installed from package
**Issue labeling**
Remote Table with large columns
| Remote Table Throws Error - createExceptionInternal: !ERROR: SQLException:RAstatement2:42000!The number of projections don't match between the generated plan and the expected one: 1 != 1200 | https://api.github.com/repos/MonetDB/MonetDB/issues/7178/comments | 3 | 2021-09-22T05:57:48Z | 2024-06-27T13:16:23Z | https://github.com/MonetDB/MonetDB/issues/7178 | 1,003,846,783 | 7,178 |
[
"MonetDB",
"MonetDB"
] | I am using the COPY BINARY syntax, but when I go to its [documentation page](https://www.monetdb.org/Documentation/Cookbooks/SQLrecipes/BinaryBulkLoad), "The requested page could not be found" is shown.

| BinaryBulkLoad page could not be found!! | https://api.github.com/repos/MonetDB/MonetDB/issues/7177/comments | 2 | 2021-09-22T02:27:01Z | 2021-09-22T07:42:18Z | https://github.com/MonetDB/MonetDB/issues/7177 | 1,003,681,417 | 7,177 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
After inserting a second item into a table, the automagical storage.sorted detection is lost.
According to [Hannes Mühleisen](https://stackoverflow.com/a/29168239/16876775):
> MonetDB will detect it automagically if a column is sorted on load.
This used to work in v11.33.11 (Apr2019-SP1), but not anymore in v11.41.5 (Jul2021).
This automagical detection appears to be undocumented on monetdb.org.
**To Reproduce**
```
DROP TABLE IF EXISTS test;
CREATE TABLE test (
number integer NOT null
);
select "column","sorted" from storage() where "table"='test' and column='number';
INSERT INTO test VALUES (1);
select "column","sorted" from storage() where "table"='test' and column='number';
INSERT INTO test VALUES (2);
select "column","sorted" from storage() where "table"='test' and column='number';
INSERT INTO test VALUES (3);
select "column","sorted" from storage() where "table"='test' and column='number';
INSERT INTO test VALUES (2);
SELECT * FROM test;
```
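The sortedness property that storage() is expected to track is just the pairwise ordering of the column values; checking it for the sequence inserted in the repro (plain Python, not MonetDB internals):

```python
def is_sorted(values):
    """True when values are in non-decreasing order (the 'sorted' flag)."""
    return all(a <= b for a, b in zip(values, values[1:]))

column = []
for v in (1, 2, 3):
    column.append(v)
    assert is_sorted(column)   # the flag should stay set after each insert

column.append(2)               # the final INSERT INTO test VALUES (2);
assert not is_sorted(column)   # only now does the column become unsorted
```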
**Expected behavior**
The automagical storage.sorted detection should work.
**Software versions**
- MonetDB version v11.41.5 (Jul2021).
- Ubuntu 18.04.4
- Installed from https://dev.monetdb.org/downloads/deb/ bionic
**Issue labeling**
automagical storage.sorted detection
| MonetDB sorted columns not working anymore? | https://api.github.com/repos/MonetDB/MonetDB/issues/7176/comments | 2 | 2021-09-10T09:43:10Z | 2021-09-10T13:01:54Z | https://github.com/MonetDB/MonetDB/issues/7176 | 993,068,088 | 7,176 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
When a database is stopped and restarted, data that was created with `INSERT INTO` inside a transaction is lost.
**To Reproduce**
```
#! /bin/sh
#
set -o errexit # Exit script on first error.
if monetdb status jrk
then
monetdb stop jrk
monetdb destroy -f jrk
fi
monetdb create jrk; monetdb release jrk
mclient='mclient --database=jrk'
table=voyages
curl 'https://dev.monetdb.org/Assets/VOC/voc_dump.sql.bz2' --output - |bzcat | $mclient
#bzcat /tmp/voc_dump.sql.bz2 | $mclient
# Data in two transactions lost on restart mserver.
$mclient -e << EOC1
START TRANSACTION;
DROP TABLE IF EXISTS ${table}_1;
DROP TABLE IF EXISTS ${table}_tmp1;
CREATE TABLE ${table}_1 AS SELECT * FROM $table;
COMMIT;
START TRANSACTION;
CREATE TABLE ${table}_tmp1 AS SELECT * FROM ${table}_1;
DELETE FROM ${table}_1;
INSERT INTO ${table}_1 SELECT * FROM ${table}_tmp1;
COMMIT;
EOC1
# Data without transaction around INSERT INTO: OK
$mclient -e << EOC2
START TRANSACTION;
DROP TABLE IF EXISTS ${table}_2;
DROP TABLE IF EXISTS ${table}_tmp2;
CREATE TABLE ${table}_2 AS SELECT * FROM $table;
COMMIT;
--START TRANSACTION;
CREATE TABLE ${table}_tmp2 AS SELECT * FROM ${table}_2;
DELETE FROM ${table}_2;
INSERT INTO ${table}_2 SELECT * FROM ${table}_tmp2;
--COMMIT;
EOC2
# Only one transaction: OK
$mclient -e << EOC3
START TRANSACTION;
DROP TABLE IF EXISTS ${table}_3;
DROP TABLE IF EXISTS ${table}_tmp3;
CREATE TABLE ${table}_3 AS SELECT * FROM $table;
--COMMIT;
--START TRANSACTION;
CREATE TABLE ${table}_tmp3 AS SELECT * FROM ${table}_3;
DELETE FROM ${table}_3;
INSERT INTO ${table}_3 SELECT * FROM ${table}_tmp3;
COMMIT;
EOC3
# One transaction around
$mclient -e << EOC4
--START TRANSACTION;
DROP TABLE IF EXISTS ${table}_4;
DROP TABLE IF EXISTS ${table}_tmp4;
CREATE TABLE ${table}_4 AS SELECT * FROM $table;
--COMMIT;
START TRANSACTION;
CREATE TABLE ${table}_tmp4 AS SELECT * FROM ${table}_4;
DELETE FROM ${table}_4;
INSERT INTO ${table}_4 SELECT * FROM ${table}_tmp4;
COMMIT;
EOC4
$mclient -e << EOC5
--START TRANSACTION;
DROP TABLE IF EXISTS ${table}_5;
DROP TABLE IF EXISTS ${table}_tmp5;
CREATE TABLE ${table}_5 AS SELECT * FROM $table;
--COMMIT;
--START TRANSACTION;
CREATE TABLE ${table}_tmp5 AS SELECT * FROM ${table}_5;
DELETE FROM ${table}_5;
INSERT INTO ${table}_5 SELECT * FROM ${table}_tmp5;
--COMMIT;
EOC5
#####################################
$mclient -e <<EOR
SELECT
(SELECT COUNT(1) as count FROM ${table}_1) as ${table}_1,
(SELECT COUNT(1) as count FROM ${table}_2) as ${table}_2,
(SELECT COUNT(1) as count FROM ${table}_3) as ${table}_3,
(SELECT COUNT(1) as count FROM ${table}_4) as ${table}_4,
(SELECT COUNT(1) as count FROM ${table}_5) as ${table}_5,
NULL
;
EOR
monetdb stop jrk
monetdb start jrk
$mclient -e <<EOR2
SELECT
(SELECT COUNT(1) as count FROM ${table}_1) as ${table}_1,
(SELECT COUNT(1) as count FROM ${table}_2) as ${table}_2,
(SELECT COUNT(1) as count FROM ${table}_3) as ${table}_3,
(SELECT COUNT(1) as count FROM ${table}_4) as ${table}_4,
(SELECT COUNT(1) as count FROM ${table}_5) as ${table}_5,
NULL
;
EOR2
monetdb stop jrk
monetdb destroy -f jrk
```
**Expected behavior**
Data should not be lost.
**Software versions**
MonetDB Database Server Toolkit v11.41.5 (Jul2021)
Ubuntu 18.04.4 LTS
Installed from https://dev.monetdb.org/downloads/deb/ bionic
**Issue labeling**
Bug that loses data.
| losing data on mserver restart with transactions and INSERT INTO | https://api.github.com/repos/MonetDB/MonetDB/issues/7175/comments | 2 | 2021-09-09T12:01:31Z | 2024-06-27T13:16:22Z | https://github.com/MonetDB/MonetDB/issues/7175 | 992,141,485 | 7,175 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
When we try to access a remote table created with the statement below, we end up getting an error which says: only the 'monetdb' user can use non-sql languages. run mserver5 with --set mal_for_all=yes to change this.
CREATE REMOTE TABLE "ABC" ("id" integer,"desc" varchar(100)) ON 'mapi:monetdb://xx.xx.xx.xxx:50670/DBNAME' WITH USER 'USERNAME' PASSWORD 'Password';
It does work when the server is started by hand using mserver5 with "--set mal_for_all=yes" passed, but how can we enable this property by default when starting via monetdbd? Can we start it using nohup on the Linux platform?
**To Reproduce**
Create a remote table in the secondary DB using the statement below and try to access it, for example: `select count(*) from "ABC";`
CREATE REMOTE TABLE "ABC" ("id" integer,"desc" varchar(100)) ON 'mapi:monetdb://xx.xx.xx.xxx:50670/DBNAME' WITH USER 'USERNAME' PASSWORD 'Password';
**Expected behavior**
The above query should return the row count of the table without an error.
**Software versions**
- MonetDB v11.41.5 (Jul2021)
- CentOS Linux 7 (Core)
- Installed
**Issue labeling**
Remote table feature
| Remote table not accessible and throws - only the 'monetdb' user can use non-sql languages. run mserver5 with --set mal_for_all=yes to change this. | https://api.github.com/repos/MonetDB/MonetDB/issues/7174/comments | 4 | 2021-09-08T14:40:20Z | 2021-09-09T04:09:32Z | https://github.com/MonetDB/MonetDB/issues/7174 | 991,216,265 | 7,174 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
If TRUNCATE is executed inside a transaction, then after a restart of MonetDB the table is empty.
**To Reproduce**
```
create table test (col int);
insert into test values (1), (2), (3);
begin transaction;
create table test_new (col int);
commit;
begin transaction;
truncate table test_new; -- if I do not do this, then everything works as expected
insert into test_new select * from test;
drop table test;
alter table test_new rename to test;
commit;
select * from test;
-- restart monetdb
select * from test;
```
**Expected behavior**
The table should keep the data.
**Software versions**
- MonetDB version number Jul2021 | If truncate is in transaction then after restart of MonetDB the table is empty | https://api.github.com/repos/MonetDB/MonetDB/issues/7173/comments | 2 | 2021-09-03T07:32:55Z | 2024-06-27T13:16:21Z | https://github.com/MonetDB/MonetDB/issues/7173 | 987,497,263 | 7,173 |
[
"MonetDB",
"MonetDB"
] |
[dump.sql.zip](https://github.com/MonetDB/MonetDB/files/7084166/dump.sql.zip)
**Describe the bug**
Querying a merge table with more than one join returns records from one child table only.
**Steps To Reproduce:**
I used the same dump of https://github.com/MonetDB/MonetDB/issues/6736 (attached)
*db restore*
```
monetdb_image="monetdb/monetdb"
monetdb_version="Jul2021"
docker pull ${monetdb_image}:${monetdb_version}
docker run -d \
--name monetdb-test \
-p 50000:50000 \
${monetdb_image}:${monetdb_version}
docker cp dump.sql monetdb-test:.
docker exec -it -u monetdb monetdb-test mclient demo
\</dump.sql
```
*query*
```
TRACE
select
"dim_periodi"."year4" as "c0",
"classi"."codice" as "c1",
count(*) as "m0"
from "dw_hospital"."dim_periodi" as "dim_periodi",
"dw_hospital"."facts_costi" as "facts_costi",
"dw_hospital"."dim_classi_movimenti" as "classi"
where "facts_costi"."periodo_id" = "dim_periodi"."id"
and "facts_costi"."classe_movimento_id" = "classi"."id"
group by "dim_periodi"."year4","classi"."codice"
;
```
*results (from one child table only)*
```
+------+------+------+
| c0 | c1 | m0 |
+======+======+======+
| 2018 | 1 | 1 |
| 2018 | 11 | 1 |
| 2018 | 2 | 1 |
| 2018 | 20 | 1 |
| 2018 | 38 | 1 |
+------+------+------+
5 tuples
+------+-------------------------------------------------------------------------------------------------------------------------------------------------------+
| usec | statement |
+======+=======================================================================================================================================================+
| 9 | X_0=0@0:void := querylog.define("trace\nselect\n\"dim_periodi\".\"year4\" as \"c0\",\n\"classi\".\"codice\" as \"c1\",\ncount(*) as \"m0\"\nfrom \"dw |
: : _hospital\".\"dim_periodi\" as \"dim_periodi\",\n\"dw_hospital\".\"facts_costi\" as \"facts_costi\",\n\"dw_hospital\".\"dim_classi_movimenti\" as \"c :
: : lassi\"\nwhere \"facts_costi\".\"periodo_id\" = \"dim_periodi\".\"id\"\nand \"facts_costi\".\"classe_movimento_id\" = \"classi\".\"id\"\ngroup by \"d :
: : im_periodi\".\"year4\",\"classi\".\"codice\"\n;":str, "default_pipe":str, 53:int); :
| 6 | X_5=0:int := sql.mvc(); |
| 60 | X_6=[3]:bat[:str] := bat.pack("dw_hospital.":str, "dw_hospital.":str, "dw_hospital.":str); |
| 46 | X_1=[3]:bat[:str] := bat.pack("int":str, "varchar":str, "bigint":str); |
| 29 | X_2=[3]:bat[:int] := bat.pack(32:int, 16:int, 64:int); |
| 33 | C_7=[5]:bat[:oid] := sql.tid(X_5=0:int, "dw_hospital":str, "facts_costi_2018":str); |
| 33 | X_8=[5]:bat[:int] := sql.bind(X_5=0:int, "dw_hospital":str, "facts_costi_2018":str, "periodo_id":str, 0:int); |
| 34 | X_10=[37]:bat[:int] := sql.bind(X_5=0:int, "dw_hospital":str, "dim_periodi":str, "id":str, 0:int); |
| 30 | X_4=[3]:bat[:str] := bat.pack("c0":str, "c1":str, "m0":str); |
| 40 | X_3=[3]:bat[:int] := bat.pack(0:int, 0:int, 0:int); |
| 19 | C_11=[37]:bat[:oid] := sql.tid(X_5=0:int, "dw_hospital":str, "dim_periodi":str); |
| 32 | C_9=[7]:bat[:oid] := sql.tid(X_5=0:int, "dw_hospital":str, "dim_classi_movimenti":str); |
| 105 | X_12=[5]:bat[:int] := sql.bind(X_5=0:int, "dw_hospital":str, "facts_costi_2018":str, "classe_movimento_id":str, 0:int); |
| 65 | X_13=[7]:bat[:int] := sql.bind(X_5=0:int, "dw_hospital":str, "dim_classi_movimenti":str, "id":str, 0:int); |
| 28 | X_14=[37]:bat[:int] := algebra.projection(C_11=[37]:bat[:oid], X_10=[37]:bat[:int]); |
| 160 | X_15=[7]:bat[:str] := sql.bind(X_5=0:int, "dw_hospital":str, "dim_classi_movimenti":str, "codice":str, 0:int); |
| 68 | X_16=[5]:bat[:int] := algebra.projection(C_7=[5]:bat[:oid], X_12=[5]:bat[:int]); |
| 63 | X_18=[7]:bat[:int] := algebra.projection(C_9=[7]:bat[:oid], X_13=[7]:bat[:int]); |
| 43 | X_17=[37]:bat[:int] := sql.bind(X_5=0:int, "dw_hospital":str, "dim_periodi":str, "year4":str, 0:int); |
| 40 | (X_19=[5]:bat[:oid], X_20=[5]:bat[:oid]) := algebra.join(X_16=[5]:bat[:int], X_18=[7]:bat[:int], nil:BAT, nil:BAT, false:bit, nil:lng); # mergejoin_i |
: : nt :
| 15 | X_21=[5]:bat[:int] := algebra.projectionpath(X_19=[5]:bat[:oid], C_7=[5]:bat[:oid], X_8=[5]:bat[:int]); |
| 4 | X_22=0@0:void := language.pass(C_7=[5]:bat[:oid]); |
| 35 | (X_23=[5]:bat[:oid], X_24=[5]:bat[:oid]) := algebra.join(X_21=[5]:bat[:int], X_14=[37]:bat[:int], nil:BAT, nil:BAT, false:bit, nil:lng); # selectjoin |
: : ; select: sorted :
| 23 | X_25=[5]:bat[:str] := algebra.projectionpath(X_23=[5]:bat[:oid], X_20=[5]:bat[:oid], C_9=[7]:bat[:oid], X_15=[7]:bat[:str]); # project1_bte |
| 33 | X_26=[5]:bat[:int] := algebra.projectionpath(X_24=[5]:bat[:oid], C_11=[37]:bat[:oid], X_17=[37]:bat[:int]); # project1_int |
| 4 | X_27=0@0:void := language.pass(C_9=[7]:bat[:oid]); |
| 4 | X_28=0@0:void := language.pass(C_11=[37]:bat[:oid]); |
| 57 | (X_29=[5]:bat[:oid], C_30=[5]:bat[:oid]) := group.group(X_25=[5]:bat[:str]); # GRP_compare_consecutive_values, dense, !groups |
| 13 | (X_31=[5]:bat[:oid], C_32=[5]:bat[:oid]) := group.subgroupdone(X_26=[5]:bat[:int], X_29=[5]:bat[:oid]); |
| 11 | X_33=[5]:bat[:int] := algebra.projection(C_32=[5]:bat[:oid], X_26=[5]:bat[:int]); |
| 3 | X_34=0@0:void := language.pass(X_26=[5]:bat[:int]); |
| 24 | X_35=[5]:bat[:lng] := aggr.subcount(X_31=[5]:bat[:oid], X_31=[5]:bat[:oid], C_32=[5]:bat[:oid], false:bit); |
| 34 | X_36=[5]:bat[:str] := algebra.projection(C_32=[5]:bat[:oid], X_25=[5]:bat[:str]); |
| 6 | X_37=0@0:void := language.pass(X_31=[5]:bat[:oid]); |
| 3 | X_38=0@0:void := language.pass(C_32=[5]:bat[:oid]); |
| 4 | X_39=0@0:void := language.pass(X_25=[5]:bat[:str]); |
| 1565 | barrier X_40=false:bit := language.dataflow(); |
| 112 | X_41=3:int := sql.resultSet(X_6=[3]:bat[:str], X_4=[3]:bat[:str], X_1=[3]:bat[:str], X_2=[3]:bat[:int], X_3=[3]:bat[:int], X_33=[5]:bat[:int], X_36=[ |
: : 5]:bat[:str], X_35=[5]:bat[:lng]); :
+------+-------------------------------------------------------------------------------------------------------------------------------------------------------+
38 tuples
```
**Expected behavior**
As per Oct2020-SP5:
```
select
"dim_periodi"."year4" as "c0",
"classi"."codice" as "c1",
count(*) as "m0"
from "dw_hospital"."dim_periodi" as "dim_periodi",
"dw_hospital"."facts_costi" as "facts_costi",
"dw_hospital"."dim_classi_movimenti" as "classi"
where "facts_costi"."periodo_id" = "dim_periodi"."id"
and "facts_costi"."classe_movimento_id" = "classi"."id"
group by "dim_periodi"."year4","classi"."codice"
;
+------+------+------+
| c0 | c1 | m0 |
+======+======+======+
| 2017 | 1 | 1 |
| 2017 | 11 | 1 |
| 2017 | 2 | 1 |
| 2017 | 20 | 1 |
| 2017 | 38 | 1 |
| 2018 | 1 | 1 |
| 2018 | 11 | 1 |
| 2018 | 2 | 1 |
| 2018 | 20 | 1 |
| 2018 | 38 | 1 |
| 2019 | 1 | 1 |
| 2019 | 11 | 1 |
| 2019 | 2 | 1 |
| 2019 | 20 | 1 |
| 2019 | 38 | 1 |
+------+------+------+
15 tuples
sql>
```
In Jul2021, without the other join, the results do come from all three year tables:
```
select
"dim_periodi"."year4" as "c0",
count(*) as "m0"
from "dw_hospital"."dim_periodi" as "dim_periodi",
"dw_hospital"."facts_costi" as "facts_costi"
where "facts_costi"."periodo_id" = "dim_periodi"."id"
group by "dim_periodi"."year4"
;
+------+------+
| c0 | m0 |
+======+======+
| 2017 | 5 |
| 2018 | 5 |
| 2019 | 5 |
+------+------+
```
**Software versions**
MonetDB Database Server Toolkit v11.41.5 (Jul2021)
docker image:
monetdb/monetdb Jul2021 e5a51a8d1129
**Issue labeling**
Bug
Jul2021
Merge Table
**Additional context**
| Unexpected query result with merge tables | https://api.github.com/repos/MonetDB/MonetDB/issues/7172/comments | 1 | 2021-08-31T13:13:43Z | 2024-06-27T13:16:20Z | https://github.com/MonetDB/MonetDB/issues/7172 | 983,848,859 | 7,172 |
[
"MonetDB",
"MonetDB"
] | Description:
The SQL query below takes a long time (5+ minutes), even though it has only 1000 keys in its IN clause.
Tested with both versions:
/monet_binaries/MonetDB-11.39.17_PY/bin/monetdbd
/monet_binaries/MonetDB-11.27.13_PY/bin/monetdbd
Let us know if more details are required.
Query details below:
SELECT DISTINCT "CN_DIM_WALGREEN_1".CONSUMER_DIM_KEY, "WALGREEN_CN_S_4_1_O".ATTR_VALUE FROM "CN_DIM_WALGREEN_1", "WALGREEN_CN_S_4_1_O" WHERE (("CN_DIM_WALGREEN_1".S_152_KEY = 2) AND ("CN_DIM_WALGREEN_1".CONSUMER_DIM_KEY IN (9247, 13947, 14237, 15499, 16953, 22325, 22975, 23080, 24344, 25307, 25924, 25950, 25969, 26032, 26056, 26469, 26940, 27780, 28167, 28853, 28910, 29690, 29827, 30011, 30404, 31949, 33586, 33866, 33944, 34154, 34507, 34649, 35245, 36114, 37037, 37134, 37856, 38046, 38981, 39069, 40026, 40315, 40838, 41322, 42148, 42155, 43016, 44067, 46021, 46632, 46883, 49054, 50396, 50469, 51572, 51956, 52462, 52663, 52978, 53006, 53626, 53971, 54365, 55538, 57869, 58164, 58506, 58612, 59792, 60074, 60106, 60372, 60767, 61046, 61425, 62053, 62445, 63661, 63694, 63785, 64208, 64316, 65112, 65385, 65803, 66159, 67129, 67165, 67323, 67356, 67556, 68312, 68877, 69068, 69331, 69973, 70135, 71706, 71783, 71864, 72200, 73201, 74933, 75475, 75525, 75603, 76396, 78121, 79465, 79617, 80172, 80356, 81171, 81241, 81599, 82217, 82253, 82506, 82527, 82615, 82888, 84108, 84347, 84875, 85038, 85554, 85874, 86045, 86074, 86894, 88015, 88276, 88804, 88845, 90111, 90250, 90282, 91056, 91182, 92965, 93727, 94505, 94596, 94773, 95162, 95633, 95754, 96098, 96183, 96394, 96960, 97809, 97874, 99856, 99868, 101492, 102548, 103427, 103878, 104072, 104105, 104724, 106241, 107142, 108536, 108878, 108889, 110467, 110832, 111818, 112041, 112117, 112468, 112591, 112850, 112970, 113677, 114735, 116345, 116988, 117116, 117587, 117739, 118372, 118760, 119248, 120179, 120659, 120851, 121352, 121558, 121953, 122263, 123124, 123574, 124067, 124913, 124997, 126542, 127138, 127381, 127436, 127523, 128712, 128833, 129861, 129960, 130443, 130685, 130824, 130865, 130904, 131067, 131118, 131641, 131710, 131869, 132062, 133030, 133205, 133291, 133409, 133470, 133733, 133898, 134129, 134625, 135056, 135179, 135495, 136116, 136158, 136265, 136353, 136359, 136375, 136580, 136622, 136659, 136717, 136866, 
136903, 137218, 137389, 137395, 137436, 137576, 137654, 138179, 138473, 138630, 138675, 138850, 139288, 139338, 139757, 139802, 140083, 140269, 140767, 140990, 141257, 141567, 141597, 141878, 141952, 142137, 142238, 142309, 143550, 143818, 144429, 144481, 144539, 144684, 145326, 145700, 145811, 146331, 146411, 146658, 146725, 147001, 147146, 147153, 147162, 147243, 147274, 147566, 148601, 149179, 149270, 149734, 149973, 150003, 150028, 150098, 150275, 150319, 150464, 150582, 151098, 151207, 151219, 151750, 152786, 153066, 153366, 153825, 153879, 155402, 156533, 156613, 156671, 156831, 156834, 157220, 157762, 158201, 158337, 158362, 158480, 158820, 158916, 159000, 159178, 159301, 159972, 160852, 161216, 161217, 161450, 161616, 161751, 161971, 162106, 162423, 162866, 163152, 163325, 163984, 164678, 166199, 167031, 167142, 167448, 169155, 170322, 170569, 170703, 170756, 170810, 171903, 172160, 172734, 173666, 174021, 174105, 174408, 174697, 175035, 175292, 175602, 175615, 177730, 179476, 179952, 180222, 180643, 180868, 182320, 184100, 184482, 185231, 185277, 185366, 186154, 186201, 186419, 187845, 188350, 192499, 192790, 192960, 194343, 194381, 194504, 195074, 195281, 196479, 198606, 199923, 200106, 200929, 202284, 203076, 203214, 203928, 204352, 204597, 204967, 205032, 206008, 206240, 206275, 207747, 208731, 209595, 211458, 212086, 212555, 213883, 213940, 214294, 214740, 215378, 215768, 215801, 215842, 215991, 216051, 216088, 216605, 216697, 217212, 218927, 220001, 222019, 223093, 224226, 226273, 226518, 226665, 227187, 229436, 230337, 231405, 231680, 232127, 232786, 233315, 235048, 236114, 236532, 236755, 237499, 237860, 238802, 242567, 245309, 246077, 247707, 247862, 248174, 248328, 252574, 254782, 254959, 256332, 256953, 258169, 258263, 259144, 260040, 261176, 262256, 262335, 262484, 263588, 263887, 264095, 265147, 267130, 268421, 268969, 268986, 269389, 269820, 269932, 270282, 270715, 270779, 271167, 271578, 273379, 274932, 275882, 276118, 277241, 278377, 282698, 
283321, 283777, 285093, 285995, 286107, 286266, 287262, 291896, 292099, 294518, 295195, 295379, 295575, 296064, 296431, 296730, 299146, 300272, 302418, 303217, 304194, 306410, 306837, 307001, 307687, 307996, 308426, 309032, 310091, 310127, 310685, 311013, 312498, 312789, 314041, 314443, 314831, 315034, 315852, 316170, 317291, 318362, 319781, 319975, 320207, 320442, 320833, 321858, 323041, 325753, 326974, 327407, 329673, 329777, 330344, 331863, 332398, 332538, 332942, 333446, 333472, 333537, 333821, 333866, 334848, 338293, 338481, 339483, 339639, 340502, 341357, 341725, 343321, 343508, 344572, 344643, 345769, 345958, 346290, 347532, 348468, 350664, 351168, 351231, 351485, 351854, 354351, 354549, 354870, 356105, 356487, 358930, 359449, 360257, 361048, 362809, 363355, 364086, 366744, 367293, 367791, 368356, 368735, 369680, 369995, 370786, 372102, 372933, 373893, 374939, 377880, 378391, 379803, 380905, 383584, 383847, 385342, 386512, 388553, 390074, 390790, 390792, 391353, 392997, 393032, 393632, 394483, 396624, 397175, 398013, 398576, 399559, 401610, 402163, 405914, 406823, 407090, 409400, 409935, 410718, 411225, 411892, 411954, 412538, 413415, 414037, 414051, 414293, 414550, 414608, 414905, 415547, 417064, 417378, 417962, 418026, 418663, 420478, 421574, 422327, 424049, 424286, 424530, 425027, 425280, 425654, 426107, 426685, 427053, 427304, 427541, 429904, 430844, 431491, 433390, 433741, 433782, 434941, 434949, 436027, 436762, 438994, 440594, 440678, 440758, 442166, 442568, 442756, 443133, 443423, 446319, 447989, 449625, 451193, 452325, 454419, 458042, 459943, 461104, 462866, 463038, 463202, 463430, 464377, 464877, 466237, 468657, 469925, 470715, 471163, 471361, 472017, 472536, 472853, 473621, 473889, 473950, 474202, 474373, 476742, 477619, 478702, 478885, 479291, 481431, 481879, 482263, 483058, 483164, 483178, 483560, 485934, 486463, 486488, 486525, 486559, 486897, 487082, 487160, 487202, 488324, 490005, 491157, 491783, 492724, 492892, 493682, 496395, 497331, 498098, 
498336, 498837, 498942, 499637, 500555, 502292, 502630, 502737, 503283, 503603, 503687, 504087, 504217, 505020, 505283, 506093, 506569, 506717, 507028, 507259, 508241, 509861, 511144, 511541, 511955, 512363, 513278, 513570, 513835, 514321, 514600, 514978, 515693, 516339, 516583, 517093, 517348, 517492, 518720, 520258, 520709, 522162, 522273, 522647, 522671, 523995, 524999, 525038, 527317, 528066, 528447, 529939, 531946, 532574, 533448, 538278, 538469, 538626, 540452, 541348, 541958, 545372, 546578, 547171, 548552, 548559, 548680, 548832, 549455, 549669, 552066, 556290, 557641, 557719, 557899, 559868, 561423, 563190, 564442, 569089, 570513, 572781, 575613, 577068, 577474, 581300, 582432, 585013, 587170, 587822, 588545, 588676, 588898, 590873, 591757, 592272, 592607, 593707, 595130, 596765, 597727, 599649, 600942, 601116, 601819, 602602, 603187, 603332, 603845, 604716, 605094, 605187, 606535, 606975, 607063, 608488, 608691, 610501, 610973, 611091, 613375, 613385, 613769, 617118, 617246, 617353, 618101, 619449, 620092, 620132, 620324, 621127, 621201, 622422, 622601, 624274, 624984, 625153, 626490, 626788, 628072, 628642, 628651, 628678, 628807, 628969, 629495, 629599, 633769, 634208, 636573, 636614, 637680, 640619, 643820, 646737, 647891, 648268, 648684, 649206, 649282, 649421, 649892, 651276, 651524, 653838, 654060, 654465, 654608, 654669, 658297, 658731, 661115, 661887, 662268, 662572, 663541, 667541, 669201, 673656, 675435, 676134, 676628, 677972, 678222, 679000, 679805, 681482, 682712, 683298, 684340, 684471, 684579, 685153, 686323, 687148, 687937, 688354, 688504, 688651, 688679, 689801, 692565, 692954, 693810, 695024, 695223, 697200, 697863, 699263, 699591, 701402, 702678, 704279, 704489, 704601, 705075, 705546, 705581, 705613, 705956, 707833, 707864, 708951, 709253, 709474, 709974, 710466, 712279, 713069, 713255, 713419, 714161, 716358, 716961, 717372, 717746, 719639, 720391, 721575, 723128, 725950, 726551, 728305, 728374, 729343, 733168, 734221, 734401, 737569, 
738021, 738025, 738431, 742870, 743203, 743574, 747446, 748813, 749428))) AND ("CN_DIM_WALGREEN_1".S_4_KEY = "WALGREEN_CN_S_4_1_O".AVP_KEY)
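A common workaround for very long IN lists — offered here as a diagnostic sketch, not a confirmed fix, reusing the table and column names from the query above — is to load the keys into a temporary table once and join against it, so the optimizer can use a hash join instead of evaluating a 1000-way predicate:

```sql
-- load the key list once (full list elided)
CREATE LOCAL TEMPORARY TABLE key_list (consumer_dim_key INT) ON COMMIT PRESERVE ROWS;
INSERT INTO key_list VALUES (9247), (13947), (14237) /* , ... */;

SELECT DISTINCT d.CONSUMER_DIM_KEY, o.ATTR_VALUE
FROM "CN_DIM_WALGREEN_1" d
JOIN key_list k ON d.CONSUMER_DIM_KEY = k.consumer_dim_key
JOIN "WALGREEN_CN_S_4_1_O" o ON d.S_4_KEY = o.AVP_KEY
WHERE d.S_152_KEY = 2;
```

Whether this helps depends on the plan the server produces for each variant.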
| Monet DB - SQL taking time for 5minutes Plus in query we have 1000 keys in IN clause | https://api.github.com/repos/MonetDB/MonetDB/issues/7171/comments | 11 | 2021-08-25T16:05:06Z | 2024-06-07T12:04:01Z | https://github.com/MonetDB/MonetDB/issues/7171 | 979,376,680 | 7,171 |
[
"MonetDB",
"MonetDB"
As per the title, I am having issues getting the MonetDB Oct2020-SP5 Bugfix Release (11.39.17) to work on Ubuntu 20.04.
I followed the instructions here: https://www.monetdb.org/downloads/deb/. When it came to installing the packages, I specified the version explicitly via:
`sudo apt install monetdb5-sql=11.39.17 monetdb5-server=11.39.17 monetdb-client=11.39.17`
After the installation completes, when I try to start mserver5 I get this error:
`mserver5: error while loading shared libraries: libbat.so.21: cannot open shared object file: No such file or directory`
I installed the same version of monetdb before on other machines running the same OS, and mserver5 was able to start: I think something wrong might have happened to the repository after the 11.41.5 release.
Any help would be greatly appreciated. | Cannot install 11.39.17 on Ubuntu 20.04 LTS | https://api.github.com/repos/MonetDB/MonetDB/issues/7170/comments | 2 | 2021-08-24T09:55:30Z | 2021-08-24T11:06:06Z | https://github.com/MonetDB/MonetDB/issues/7170 | 977,920,860 | 7,170 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
I have a system developed in Delphi that connects to MonetDB using MAPI.
This system uses multiple concurrent threads.
When I updated MonetDB to version "11.41.5", this error started to appear intermittently:
42000!CREATE OR REPLACE FUNCTION: transaction conflict detected
This error occurs when I connect and run this command in MonetDB:
create or replace function wk_pcre_imatch(s string, pattern string) returns BOOLEAN external name pcre.imatch
I don't know if this is a BUG or if it's something I need to change in the system to make it compatible with some change in the MonetDB version.
Can you help me?
**To Reproduce**
Create multiple threads that simultaneously connect and execute the command "create or replace function wk_pcre_imatch(s string, pattern string) returns BOOLEAN external name pcre.imatch".
I think the programming language doesn't matter. Just use MAPI.
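One common mitigation — a sketch under the assumption that the DDL does not actually need to run on every connection — is to probe sys.functions first and issue the CREATE only when the function is missing, so concurrent sessions stop racing on conflicting CREATE OR REPLACE statements:

```sql
-- run per session; execute the CREATE only when the probe returns 0
SELECT count(*) FROM sys.functions WHERE name = 'wk_pcre_imatch';

CREATE FUNCTION wk_pcre_imatch(s string, pattern string)
RETURNS BOOLEAN EXTERNAL NAME pcre.imatch;
```

Alternatively, run the CREATE OR REPLACE once at application startup rather than from every thread.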
**Software versions**
- MonetDB version number 11.41.5
- OS and version: Windows 10 64 bits
- Installed from release package | CREATE OR REPLACE FUNCTION: transaction conflict detected | https://api.github.com/repos/MonetDB/MonetDB/issues/7169/comments | 6 | 2021-08-20T19:21:08Z | 2021-08-27T13:31:55Z | https://github.com/MonetDB/MonetDB/issues/7169 | 975,861,838 | 7,169 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
Every MAL command/pattern comes with a multiline comment, which can be retrieved using
SELECT * FROM sys.functions WHERE module='alarm' and function='sleep';
**To Reproduce**
In the transition to static binding this functionality is gone, while the comments
are still available in the code. It is partly hidden by the NDEBUG, but this
information is also relevant to interpret the EXPLAIN and TRACE results by SQL users.
**Expected behavior**
This should be fixed and all MAL signature structures should be accessible from SQL
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Software versions**
- MonetDB version number [Jul2021, defaultl]
- OS and version: [e.g. Ubuntu 18.04]
- Installed from release package or self-installed and compiled
| Loosing the documentation | https://api.github.com/repos/MonetDB/MonetDB/issues/7168/comments | 0 | 2021-08-20T18:55:33Z | 2024-06-27T13:16:18Z | https://github.com/MonetDB/MonetDB/issues/7168 | 975,846,570 | 7,168 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
Found two problems with the `sys.shutdown()` function.
1. `call sys.shutdown(<int>)` doesn't shut down the server, nor the calling (m)client. In the current `mclient`, I can still run queries normally, but if I try to establish a new (m)client connection, I get "system shutdown in progress, please try again later"
2. `call sys.shutdown(<int>, <force = true>)` causes `mclient` to exit with the message "unexpected end of file" and mserver5 to exit with an AddressSanitizer error (hg id: afedb69bc1e7):
```
=================================================================
==1450314==ERROR: AddressSanitizer: heap-use-after-free on address 0x617000080028 at pc 0x7f26707a135d bp 0x7f25fe16d920 sp 0x7f25fe16d910
WRITE of size 8 at 0x617000080028 thread T38
#0 0x7f26707a135c in join_detached_threads /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/gdk/gdk_system.c:836
#1 0x7f2670699b43 in GDKprepareExit /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/gdk/gdk_utils.c:1196
#2 0x7f2671279e23 in mal_reset /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/mal/mal.c:117
#3 0x7f267127a0f5 in mal_exit /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/mal/mal.c:167
#4 0x7f26714a5ed7 in CLTshutdown /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/modules/mal/clients.c:847
#5 0x7f266a4f362a in SQLshutdown_wrap /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/sql/backends/monet5/sql.c:281
#6 0x7f26712d05dd in runMALsequence /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/mal/mal_interpreter.c:645
#7 0x7f26712cd4f5 in runMAL /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/mal/mal_interpreter.c:334
#8 0x7f266a54b4e4 in SQLrun /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/sql/backends/monet5/sql_execute.c:249
#9 0x7f266a54eb4d in SQLengineIntern /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/sql/backends/monet5/sql_execute.c:688
#10 0x7f266a548d58 in SQLengine /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/sql/backends/monet5/sql_scenario.c:1304
#11 0x7f26713132c6 in runPhase /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/mal/mal_scenario.c:449
#12 0x7f267131359f in runScenarioBody /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/mal/mal_scenario.c:475
#13 0x7f26713139db in runScenario /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/mal/mal_scenario.c:507
#14 0x7f2671317297 in MSserveClient /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/mal/mal_session.c:501
#15 0x7f267131660e in MSscheduleClient /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/mal/mal_session.c:388
#16 0x7f26714e8683 in doChallenge /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/modules/mal/mal_mapi.c:212
#17 0x7f267069c5fb in THRstarter /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/gdk/gdk_utils.c:1643
#18 0x7f26707a0dc1 in thread_starter /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/gdk/gdk_system.c:789
#19 0x7f266f90f431 in start_thread (/lib64/libpthread.so.0+0x9431)
#20 0x7f266f83b912 in __GI___clone (/lib64/libc.so.6+0x101912)
0x617000080028 is located 40 bytes inside of 680-byte region [0x617000080000,0x6170000802a8)
freed by thread T38 here:
#0 0x7f2671a34307 in __interceptor_free (/lib64/libasan.so.6+0xb0307)
#1 0x7f26707a0caa in rm_posthread_locked /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/gdk/gdk_system.c:770
#2 0x7f26707a0cf4 in rm_posthread /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/gdk/gdk_system.c:777
#3 0x7f26707a13eb in join_detached_threads /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/gdk/gdk_system.c:839
#4 0x7f2670699b43 in GDKprepareExit /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/gdk/gdk_utils.c:1196
#5 0x7f2671279e23 in mal_reset /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/mal/mal.c:117
#6 0x7f267127a0f5 in mal_exit /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/mal/mal.c:167
#7 0x7f26714a5ed7 in CLTshutdown /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/modules/mal/clients.c:847
#8 0x7f266a4f362a in SQLshutdown_wrap /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/sql/backends/monet5/sql.c:281
#9 0x7f26712d05dd in runMALsequence /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/mal/mal_interpreter.c:645
#10 0x7f26712cd4f5 in runMAL /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/mal/mal_interpreter.c:334
#11 0x7f266a54b4e4 in SQLrun /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/sql/backends/monet5/sql_execute.c:249
#12 0x7f266a54eb4d in SQLengineIntern /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/sql/backends/monet5/sql_execute.c:688
#13 0x7f266a548d58 in SQLengine /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/sql/backends/monet5/sql_scenario.c:1304
#14 0x7f26713132c6 in runPhase /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/mal/mal_scenario.c:449
#15 0x7f267131359f in runScenarioBody /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/mal/mal_scenario.c:475
#16 0x7f26713139db in runScenario /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/mal/mal_scenario.c:507
#17 0x7f2671317297 in MSserveClient /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/mal/mal_session.c:501
#18 0x7f267131660e in MSscheduleClient /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/mal/mal_session.c:388
#19 0x7f26714e8683 in doChallenge /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/modules/mal/mal_mapi.c:212
#20 0x7f267069c5fb in THRstarter /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/gdk/gdk_utils.c:1643
#21 0x7f26707a0dc1 in thread_starter /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/gdk/gdk_system.c:789
#22 0x7f266f90f431 in start_thread (/lib64/libpthread.so.0+0x9431)
previously allocated by thread T3 here:
#0 0x7f2671a34667 in __interceptor_malloc (/lib64/libasan.so.6+0xb0667)
#1 0x7f26707a192d in MT_create_thread /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/gdk/gdk_system.c:874
#2 0x7f267069cbae in THRcreate /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/gdk/gdk_utils.c:1675
#3 0x7f26714e99eb in SERVERlistenThread /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/modules/mal/mal_mapi.c:432
#4 0x7f26707a0dc1 in thread_starter /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/gdk/gdk_system.c:789
#5 0x7f266f90f431 in start_thread (/lib64/libpthread.so.0+0x9431)
Thread T38 created by T3 here:
#0 0x7f26719dbbe5 in __interceptor_pthread_create (/lib64/libasan.so.6+0x57be5)
#1 0x7f26707a1e66 in MT_create_thread /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/gdk/gdk_system.c:899
#2 0x7f267069cbae in THRcreate /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/gdk/gdk_utils.c:1675
#3 0x7f26714e99eb in SERVERlistenThread /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/modules/mal/mal_mapi.c:432
#4 0x7f26707a0dc1 in thread_starter /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/gdk/gdk_system.c:789
#5 0x7f266f90f431 in start_thread (/lib64/libpthread.so.0+0x9431)
Thread T3 created by T0 here:
#0 0x7f26719dbbe5 in __interceptor_pthread_create (/lib64/libasan.so.6+0x57be5)
#1 0x7f26707a1e66 in MT_create_thread /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/gdk/gdk_system.c:899
#2 0x7f26714eca47 in SERVERlisten /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/modules/mal/mal_mapi.c:795
#3 0x7f26714ed132 in SERVERlisten_default /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/modules/mal/mal_mapi.c:848
#4 0x7f267131e2fe in initModule /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/mal/mal_prelude.c:91
#5 0x7f2671321b2b in malPrelude /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/mal/mal_prelude.c:443
#6 0x7f2671321e6f in malIncludeModules /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/mal/mal_prelude.c:477
#7 0x7f2671313cea in malBootstrap /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/mal/mal_session.c:57
#8 0x7f2671279d1c in mal_init /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/monetdb5/mal/mal.c:91
#9 0x407c91 in main /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/tools/mserver/mserver5.c:808
#10 0x7f266f761041 in __libc_start_main (/lib64/libc.so.6+0x27041)
SUMMARY: AddressSanitizer: heap-use-after-free /export/scratch1/home/zhang/monetdb/mdb-src/Jul2021/gdk/gdk_system.c:836 in join_detached_threads
```
**To Reproduce**
Just run `call sys.shutdown(1);` or `call sys.shutdown(1, true);` in a (m)client.
**Expected behavior**
`mserver5` should shut down without errors. `mclient` should not give errors.
**Software versions**
- MonetDB version number Jul2021 (hg id: afedb69bc1e7)
- OS and version: Fedora 33
- self-installed and compiled
| sys.shutdown() problems | https://api.github.com/repos/MonetDB/MonetDB/issues/7167/comments | 3 | 2021-08-19T13:04:34Z | 2024-06-27T13:16:18Z | https://github.com/MonetDB/MonetDB/issues/7167 | 974,642,156 | 7,167 |
[
"MonetDB",
"MonetDB"
] | When we run any SQL with any math functions like SUM, AVG, etc, we are getting the below error from the driver.
**ExceptionLogger::GetExceptionMessage: Xerces exception caught XERCES_CPP_NAMESPACE::DOMException - can't supply error message**
**To Reproduce**
Running a query of the pattern below triggers the issue:
SELECT SUM(DISTINCT <COLUMN_NAME>) FROM "TABLE_NAME" WHERE <COLUMN_NAME> = XXXXX;
**Expected behavior**
The query should execute successfully and return the sum of the values in the column.
**Software versions**
- MonetDB v11.39.17 (Oct2020-SP5)
- CentOS 7
- Compiled from Sources
**Issue labeling**
The issue occurs with the bundled ODBC driver.
One finding:
With the new driver, the result column names appear as below: %15, %16, etc.
+---------------------+---------------------+---------------------+
| %15 | %16 | %17 |
+---------------------+---------------------+---------------------+
| 126881638 | 126881812 | 126877634 |
+---------------------+---------------------+---------------------+
With the old driver they are L3, L6, L11, etc. Will this cause anything specific to the application we use?
+---------------------+---------------------+---------------------+
| L3 | L6 | L11 |
+---------------------+---------------------+---------------------+
| 126881638 | 126881812 | 126877634 |
+---------------------+---------------------+---------------------+
| Issue while using Math functions with MonetDB v11.39.17 (Oct2020-SP5) ODBC Driver | https://api.github.com/repos/MonetDB/MonetDB/issues/7166/comments | 4 | 2021-08-18T11:02:10Z | 2021-09-08T14:33:23Z | https://github.com/MonetDB/MonetDB/issues/7166 | 973,542,904 | 7,166 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
This is adapted from the Stack Overflow post recently added: https://stackoverflow.com/questions/68717481/running-a-distributed-join-query-on-merged-remote-tables
According to @joerivanruth, it seems like there is a bug w.r.t. running distributed join queries on merged remote tables. I've gone ahead and set up a cluster of DBs where two simple tables (one referencing the other), `s` and `t`, are sharded between two DBs. After setting up the leader/master node with the relevant `REMOTE` and `MERGE` tables (as well as adding `PRIMARY` and `FOREIGN` `KEY` constraints to each table), I try to run a simple join query on both merged tables and get a `JOINIDX: missing '.'` error.
_Brief thoughts on the `PLAN`:_
I assume that when looking at the join query `PLAN` I should see some shards of one table (say a shard of `s`) being joined with the entire other table (entire `t` table). However, when looking at the actual `PLAN` for the join query, it looks like the joins are pushed down to each DB which then only joins the shards of the tables that are resident in that DB (instead each shard on one table on the remote DB should be joined with the entire other table)(in the "example" then only a shard of `s` is joined with a shard of `t` on the remote DB's).
**To Reproduce**
_Summary of setup_
Goal: Have 1 leader node (L) and 2 follower nodes (F1, F2) where `s` and `t` tables are sharded amongst the follower nodes (F1, F2). The leader node (L) only has `REMOTE` and `MERGE` tables and runs all queries.
`s` table: Sharded in half (1st two rows on F1, rest on F2)
| primary_key | value |
| ----------- | ----- |
| 0 | 42 |
| 1 | 35 |
| 2 | 2 |
| 3 | 10 |
`t` table: Sharded in half (1st two rows on F1, rest on F2). Foreign key references `s` table. Note that each shard of `t` will refer to 2 different shards of `s` (on separate followers).
| primary_key | foreign_key | value |
| ----------- | ----------- | ----- |
| 0 | 0 | abc |
| 1 | 2 | efg |
| 2 | 3 | hij |
| 3 | 1 | lmn |
1. Similar to [this](https://www.monetdb.org/Documentation/ServerAdministration/DistributedQueryProcessing) link, setup 3 servers on the same machine on different ports.
2. Create the `s` and `t` shard tables on F1 and F2. Add a `PRIMARY KEY` constraint on each table shard at this step. Also add elements to the tables.
3. Add a `FOREIGN KEY` constraint on table `t`'s shards referencing the combined `s` table (a `MERGE` table containing all of `s`'s shards). This is needed so that the foreign key constraint `ALTER` statement doesn't error.
4. On the L node, create `MERGE` tables for the `s` and `t` tables (making sure to set the relevant primary and foreign keys).
5. Run a `SELECT * FROM s, t WHERE s_primary_key = t_foreign_key;` query. Error happens here.
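Steps 2–4 above can be sketched as follows, showing the `s` table only (the port numbers, database names, and table names here are illustrative assumptions; the linked repository has the exact statements):

```sql
-- On follower F1 (F2 is analogous, with the remaining rows):
CREATE TABLE s1 (pk INT PRIMARY KEY, v INT);
INSERT INTO s1 VALUES (0, 42), (1, 35);

-- On the leader L:
CREATE REMOTE TABLE s1 (pk INT, v INT) ON 'mapi:monetdb://localhost:50001/f1';
CREATE REMOTE TABLE s2 (pk INT, v INT) ON 'mapi:monetdb://localhost:50002/f2';
CREATE MERGE TABLE s (pk INT PRIMARY KEY, v INT);
ALTER TABLE s ADD TABLE s1;
ALTER TABLE s ADD TABLE s2;
```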
_GitHub Link to Reproducible Code_
https://github.com/abejgonzalez/mdb-example
**Expected behavior**
I expect that the join query finishes without an error `JOINIDX: missing '.'`.
**Screenshots**
N/A
**Software versions**
- `monetdb --version` : `MonetDB Database Server Toolkit v11.39.17 (Oct2020-SP5)`
- `monetdbd --version` : `MonetDB Database Server v11.39.17 (Oct2020-SP5)`
- `lsb_release -a` description: `Debian GNU/Linux 10 (buster)`
- Installed from a release package
| `JOINIDX: missing '.'` when running distributed join query on merged remote tables | https://api.github.com/repos/MonetDB/MonetDB/issues/7165/comments | 20 | 2021-08-10T17:15:55Z | 2024-06-27T13:16:16Z | https://github.com/MonetDB/MonetDB/issues/7165 | 965,160,511 | 7,165 |
[
"MonetDB",
"MonetDB"
] | In the current version, it is not possible to introduce global variables, although some are defined implicitly in the code base.
It should be possible to introduce variables at the schema level again.
Furthermore, consider predefined variables, such as 'optimizer' to be turned into keyword with associated rule
SET OPTIMIZER 'minimal-path'
| Alignment of global variables and predefined ones | https://api.github.com/repos/MonetDB/MonetDB/issues/7164/comments | 8 | 2021-08-08T08:58:58Z | 2023-04-28T09:10:35Z | https://github.com/MonetDB/MonetDB/issues/7164 | 963,387,165 | 7,164 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
A query involving multiple instances of the same view results in a MAL plan with multiple `sql.mvc()` invocations, which blows up the plan (`commonTerms` optimizer cannot de-duplicate)
**To Reproduce**
I reproduced the issue in a small script: [multiple_mvc.sql.txt](https://github.com/MonetDB/MonetDB/files/6937970/multiple_mvc.sql.txt)
The MAL plan generated by this script contains 2 `sql.mvc()` invocations, with some code duplication due to that.
Note that this is a stripped-down example. The real query where this comes from has more than 10 invocations, which generates cascades of code duplication for a MAL plan that takes 3 seconds just to be generated (in 11.35 the plan is generated in 150ms).
**Expected behavior**
Only 1 `sql.mvc()` invocation occurs in a MAL plan.
**Software versions**
- MonetDB version number 11.39.18
- OS and version: FC 34
- Compiled from sources
| Multiple sql.mvc() invocations in the same query | https://api.github.com/repos/MonetDB/MonetDB/issues/7163/comments | 16 | 2021-08-05T09:01:10Z | 2024-06-27T13:16:15Z | https://github.com/MonetDB/MonetDB/issues/7163 | 961,603,972 | 7,163 |
[
"MonetDB",
"MonetDB"
The sys.var_values view only provides a limited view of the predefined variables.
sql>select * from sys.var_values;
+------------------+-------------------+
| var_name | value |
+==================+===================+
| current_role | monetdb |
| current_schema | sys |
| current_timezone | 7200.000 |
| current_user | monetdb |
| debug | 0 |
| last_id | 0 |
| optimizer | default_pipe |
| pi | 3.141592653589793 |
| rowcnt | 1 |
+------------------+-------------------+
It should also include the pseudo variables: USER, SESSION_USER, and NOW.
| Extend sys.var_values table | https://api.github.com/repos/MonetDB/MonetDB/issues/7162/comments | 1 | 2021-08-02T11:21:26Z | 2024-06-27T13:16:15Z | https://github.com/MonetDB/MonetDB/issues/7162 | 958,040,757 | 7,162 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
A `LOCAL TEMPORARY` table remains empty after `INSERT INTO`
**To Reproduce**
```
sql>create local temporary table t(i int);
operation successful
sql>insert into t values(1);
1 affected row
sql>select * from t;
+---+
| i |
+===+
+---+
0 tuples
```
**Expected behavior**
Table `t` should contain 1 row.
**Software versions**
- MonetDB version number 11.39.18
- CentOS 7
- Compiled from sources
| LOCAL TEMPORARY table empty after insert | https://api.github.com/repos/MonetDB/MonetDB/issues/7160/comments | 3 | 2021-07-21T14:30:05Z | 2021-07-22T12:58:41Z | https://github.com/MonetDB/MonetDB/issues/7160 | 949,776,481 | 7,160 |
[
"MonetDB",
"MonetDB"
] | **Is your feature request related to a problem? Please describe.**
Applications may need to create temporary tables and views from different sessions, and those may use the same table/view names.
For tables, this is possible by using `CREATE LOCAL TEMPORARY TABLE`. With this, the scope of the table is limited to the current session, with no clashes among sessions.
However, the same is not allowed for views. I don't see a reason why not.
Note that the current implementation actually creates a problem, which can be seen as a bug by itself:
I can create a `LOCAL TEMPORARY` table, then a view that uses the table, exit the session, and find an invalid view when reopening a new session:
```
sql>create local temporary table t(i int);
operation successful
sql>insert into t values(1);
1 affected row
sql>create view v as select * from t;
operation successful
-- exit mclient and re-enter
sql>select * from v;
SELECT: no such table 't'
```
Note the PostgreSQL solves this issue by forcing a view as temporary when the objects it depends on are temporary:
> If any of the tables referenced by the view are temporary, the view is created as a temporary view (whether TEMPORARY is specified or not).
**Describe the solution you'd like**
`CREATE LOCAL TEMPORARY VIEW`
**Describe alternatives you've considered**
The only alternative I can think of is that the application uses different names per session. Of course that is possible, but it requires quite a bit more bookkeeping. On the other hand, if this feature already exists for tables, it seems logical to have it for views as well.
| CREATE LOCAL TEMPORARY VIEW | https://api.github.com/repos/MonetDB/MonetDB/issues/7159/comments | 4 | 2021-07-21T14:20:06Z | 2024-06-27T13:16:13Z | https://github.com/MonetDB/MonetDB/issues/7159 | 949,760,498 | 7,159 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
Python aggregate UDF returns garbage when run on empty table
**To Reproduce**
example of aggregate is from: https://www.monetdb.org/blog/embedded-pythonnumpy-monetdb
```
DROP AGGREGATE python_aggregate;
CREATE AGGREGATE python_aggregate(val INTEGER)
RETURNS INTEGER
LANGUAGE PYTHON {
try:
unique = numpy.unique(aggr_group)
x = numpy.zeros(shape=(unique.size))
for i in range(0, unique.size):
x[i] = numpy.sum(val[aggr_group==unique[i]])
except NameError:
# aggr_group doesn't exist. no groups, aggregate on all data
x = numpy.sum(val)
return(x)
};
DROP TABLE IF EXISTS test;
CREATE TABLE test (x INTEGER);
SELECT python_aggregate(x) FROM test;
-- returns this (if I play around with returning data types by switching to bigints, the number gets bigger)
sql>select python_aggregate(x) from test;
+------------+
| %1 |
+============+
| 1212796176 |
+------------+
1 tuple
sql:0.034 opt:0.242 run:0.098 clk:201.432 ms
```
**Expected behavior**
It should return 0.
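Since the aggregate falls back to `numpy.sum` over the (empty) input, the expected result follows the usual convention that a sum over no elements is 0 — `numpy.sum` on an empty array behaves the same way as Python's built-in `sum`. A minimal plain-Python sketch of that expectation (illustration only, not MonetDB code):

```python
# The empty-table case: the UDF receives no values, and the aggregate
# should reduce an empty sequence to 0, not to uninitialized memory.
values = []            # stands in for the empty `val` column of table test
total = sum(values)    # a sum over zero elements is 0 by definition
print(total)           # → 0
```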
**Software versions**
- MonetDB version number Oct2020-SP5
- OS and version: Ubuntu latest LTS in Docker container
- From deb package | Python aggregate UDF returns garbage when run on empty table | https://api.github.com/repos/MonetDB/MonetDB/issues/7158/comments | 1 | 2021-07-16T11:57:47Z | 2024-06-27T13:16:12Z | https://github.com/MonetDB/MonetDB/issues/7158 | 946,226,569 | 7,158 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
For a table-returning UDF, I have both a scalar and a bulk implementation, and they both work. For some reason, using a loop over the scalar implementation is preferred to using the bulk implementation.
However, I am not even sure if this choice is fully deterministic, I have been noticing seemingly unjustified switches between the two.
My understanding has always been that when a bulk implementation is available, then it is always preferred. But this isn't what I'm observing.
The function on which I'm observing this has many parameters, so the following is a bit verbose.
To make the context more clear, it's a tokenizer. It usually takes a whole relation with identifiers and strings and returns a longer relation, with identifiers and single tokens.
This is the SQL interface:
```
CREATE OR REPLACE FUNCTION tokenize(id integer, s string, d string, m tinyint, g tinyint, stemmer string,
cs boolean, asciify boolean, prob double)
RETURNS TABLE (id integer, token string, prob double)
EXTERNAL NAME spinque."UTF8tokenize_v2";
```
This is the MEL definition for the bulk implementation (it expects a relation in input):
```
command("batspinque", "UTF8tokenize_v2", SPQbat_utf8_tokenize, false, "",
args(3, 12, batarg("",int),batarg("",str),batarg("",dbl), batarg("id",int),batarg("s",str),
batarg("delims",str),batarg("min_tok_len",bte),batarg("grams",bte),
batarg("stemmer",str),batarg("cs",bit),batarg("asciify",bit),batarg("prob",dbl)))
```
This is the scalar implementation, in MAL (because it only covers corner-cases, it actually redirects to the bulk implementation, with singleton bats):
```
inline function spinque.UTF8tokenize_v2(id:int,s:str,delims:str,min_tok_len:bte,grams:bte,stemmer:str,cs:bit,asciify:bit,prob:dbl)
(:bat[:int],:bat[:str],:bat[:dbl]);
bid := bat.single(id);
bs := bat.single(s);
bdelims := bat.single(delims);
bmin_tok_len := bat.single(min_tok_len);
bgrams := bat.single(grams);
bstemmer := bat.single(stemmer);
bcs := bat.single(cs);
basciify := bat.single(asciify);
bprob := bat.single(prob);
(r1,r2,r3) := batspinque.UTF8tokenize_v2(bid,bs,bdelims,bmin_tok_len,bgrams,bstemmer,bcs,basciify,bprob);
return (r1,r2,r3);
end spinque.UTF8tokenize_v2;
```
**What happens**
The scalar implementation is chosen for a relation input:
```
SELECT * FROM tokenize( (SELECT .... FROM ...) );
```
If I bind the SQL signature directly to the bulk implementation (`... EXTERNAL NAME batspinque."UTF8tokenize_v2"`) , then it works as expected. This means the bulk implementation is correct. But, AFAIK, the SQL signature should always be bound to the scalar module.
Could the fact that one is written in MAL and one in C influence the priority?
**Expected behavior**
The bulk implementation is chosen for a relation input.
**Software versions**
- MonetDB version number 11.41.0 (but similar behaviours observed in 11.39.18)
- Fedora 34
- Compiled from sources
| Scalar implementation preferred to bulk implementation in table-returning UDFs | https://api.github.com/repos/MonetDB/MonetDB/issues/7157/comments | 4 | 2021-07-14T16:46:02Z | 2021-09-06T15:49:29Z | https://github.com/MonetDB/MonetDB/issues/7157 | 944,616,060 | 7,157 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
The query does not execute: an internal error (crash) occurs and nothing appears on the screen.
However, an error does occur inside the MonetDB server.
**To Reproduce**
Run the SELECT command below:
```
select
    A c0,
    count(distinct A) c1
from
(
    select 'a' A, '1' B
    union all
    select 'a', '2'
    union all
    select 'a', '3'
    union all
    select 'b', '4'
    union all
    select 'b', '5'
    union all
    select 'c', '6'
) T
group by
    grouping sets ((c0), ())
```
**Expected behavior**
The error message appears on the screen or the command will run successfully if there is no syntax problem.
**Screenshots**


**Software versions**
- MonetDB version number [MonetDB 5 server v11.39.17 (Oct2020-SP5)]
- OS and version: [Windows 10 Pro 64 bits - 20H2]
- Installed from release package
| Query produces this error and is not shown on the screen: !ERROR: Could not find (null).c0 | https://api.github.com/repos/MonetDB/MonetDB/issues/7156/comments | 2 | 2021-07-13T18:03:55Z | 2024-06-27T13:16:11Z | https://github.com/MonetDB/MonetDB/issues/7156 | 943,695,669 | 7,156 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
"count distinct" is not working properly.
**To Reproduce**
Run this SQL command:
```
select
    A,
    count(distinct A)
from
(
    select 'a' A, '1' B
    union all
    select 'a', '2'
    union all
    select 'a', '3'
    union all
    select 'b', '4'
    union all
    select 'b', '5'
    union all
    select 'c', '6'
) T
group by
    A
```
The result will be this:

**Expected behavior**
It should always return 1 in the second column.
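The expectation can be cross-checked outside the database with a few lines of plain Python (illustration only, not MonetDB code): when grouping by A, every group holds exactly one distinct A value, so `count(distinct A)` per group must be 1.

```python
# Re-create the sample A column and compute count(distinct A) per group-by-A group.
rows = [('a', 1), ('a', 2), ('a', 3), ('b', 4), ('b', 5), ('c', 6)]

groups = {}
for a, _ in rows:
    groups.setdefault(a, set()).add(a)   # collect distinct A values per group

counts = {key: len(vals) for key, vals in groups.items()}
print(counts)   # → {'a': 1, 'b': 1, 'c': 1}
```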
**Software versions**
- MonetDB version number [MonetDB 5 server v11.39.17 (Oct2020-SP5)]
- OS and version: [Windows 10 Pro 64 bits - 20H2]
- Installed from release package | "count distinct" is not working properly | https://api.github.com/repos/MonetDB/MonetDB/issues/7155/comments | 1 | 2021-07-13T17:44:12Z | 2024-06-27T13:16:10Z | https://github.com/MonetDB/MonetDB/issues/7155 | 943,678,041 | 7,155 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
I found behavior that looks different from what is shown in the documentation.
The documentation shows this:
Names are used to designate database objects. In that role, they are by default in-sensitive case unless encapsulated by double quotes.
Document link:
https://www.monetdb.org/Documentation/SQLReference/LexicalStructure
But in practice, something else happens.
**To Reproduce**
Create this table:
create table teste2 ("BASECOMPETENCIA" text)
And then run this command:
select BASECOMPETENCIA from teste2
This error will happen:
Error: SELECT: identifier 'basecompetencia' unknown
SQLState: 42000
ErrorCode: 0
**Expected behavior**
The command executes successfully (identifiers are matched case-insensitively).
**Software versions**
- MonetDB version number [MonetDB 5 server v11.39.17 (Oct2020-SP5)]
- OS and version: [Windows 10 Pro 64 bits - 20H2]
- Installed from release package
| Identifiers and Keywords: problem with character case and double quotes | https://api.github.com/repos/MonetDB/MonetDB/issues/7154/comments | 6 | 2021-07-12T18:40:56Z | 2024-06-27T13:16:09Z | https://github.com/MonetDB/MonetDB/issues/7154 | 942,339,661 | 7,154 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
System functions that are included at compile time via `create_include_object` in `CMakeLists.txt` are created when the database is first created. They appear to lose their indentation. All lines start at column 0.
The same actually happens in 11.35, when the same functionality was handled via the `createdb` subfolder.
The same functions do keep their indentation when defined via mclient.
That is harmless for pure SQL functions, but it completely breaks functions whose body is in Python.
**Software versions**
- MonetDB version number: All releases? Found on 11.39.18
- OS and version: CentOS 7, Fedora 34
- compiled from sources
| System UDFs lose their indentation - Python functions broken | https://api.github.com/repos/MonetDB/MonetDB/issues/7153/comments | 2 | 2021-07-12T16:57:09Z | 2024-06-27T13:16:08Z | https://github.com/MonetDB/MonetDB/issues/7153 | 942,254,880 | 7,153 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
For the third time, on different databases, it happened that a properly shut down database would not restart, with the following errors found in `merovingian.log`:
```
2021-06-28 11:33:10 MSG merovingian[7]: database 'equip-vc_default01' (-1) has exited with exit status 0
2021-06-28 11:33:20 MSG merovingian[7]: starting database 'equip-vc_default01', up min/avg/max: 1s/1d/6d, crash average: 0.00 0.40 0.13 (8-4=4)
2021-06-28 11:33:21 MSG equip-vc_default01[51470]: arguments: /opt/monetdb/bin/mserver5 --dbpath=/var/lib/monetdb/dbfarm/equip-vc_default01 --set merovingian_uri=mapi:monetdb://f4c5ad81e6df:50000/equip-vc_default01 --set mapi_listenaddr=none --set mapi_usock=/var/lib/monetdb/dbfarm/equip-vc_default01/.mapi.sock --set
monet_vault_key=/var/lib/monetdb/dbfarm/equip-vc_default01/.vaultkey --set gdk_nr_threads=8 --set max_clients=64 --set sql_optimizer=sequential_pipe --set embedded_py=3 --set mal_for_all=yes
2021-06-28 11:33:21 ERR equip-vc_default01[51470]: #main thread: BBPcheckbats: !ERROR: BBPcheckbats: cannot stat file /var/lib/monetdb/dbfarm/equip-vc_default01/bat/05/513.tail (expected size 18536): No such file or directory
2021-06-28 11:33:23 MSG merovingian[7]: database 'equip-vc_default01' (-1) has exited with exit status 1
```
Storage is a local SSD, so I tend to rule out storage-related issues.
**To Reproduce**
Unfortunately I am not able to reproduce it reliably. I can only say it never happened before Oct2020, and now it already happened 3 times, so I guess there is a bug in the storage layer triggered by some corner-case.
I know it's hard to find the cause without a test, I just hope this can ring a bell.
**Software versions**
- 11.39.18
- CentOS 7
- compiled from sources | Occasional dbfarm corruption upon database restart | https://api.github.com/repos/MonetDB/MonetDB/issues/7152/comments | 7 | 2021-07-09T12:02:27Z | 2024-06-27T13:16:07Z | https://github.com/MonetDB/MonetDB/issues/7152 | 940,701,903 | 7,152 |
[
"MonetDB",
"MonetDB"
] | Hi,
I'm inserting 74500 records through batch (JDBC driver) and it is taking around 15 minutes.
Why is the insertion so slow?
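A common cause of this pattern is committing (or round-tripping) once per row instead of once per batch. The sketch below illustrates the batching pattern with Python's built-in sqlite3 module purely as a stand-in engine — the same idea applies on the JDBC side: `addBatch`/`executeBatch` inside a single transaction, i.e. `setAutoCommit(false)` plus one `commit()` at the end.

```python
import sqlite3

# Stand-in engine to illustrate the pattern; not MonetDB-specific code.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (i INTEGER)")

rows = [(i,) for i in range(74500)]

# One transaction around the whole batch instead of one commit per INSERT.
with conn:
    conn.executemany("INSERT INTO t VALUES (?)", rows)

count = conn.execute("SELECT count(*) FROM t").fetchone()[0]
print(count)   # → 74500
```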
We don't want to insert data through csv | Insertion is too slow | https://api.github.com/repos/MonetDB/MonetDB/issues/7151/comments | 21 | 2021-07-08T09:56:48Z | 2024-06-27T13:16:06Z | https://github.com/MonetDB/MonetDB/issues/7151 | 939,684,848 | 7,151 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
I downloaded the following release of MonetDB: MonetDB-Oct2020_SP5_release,
and tried to build it using CMake; however, I'm getting the error shown below.
I am working with a fresh install of Ubuntu.
I can't figure out what I am doing wrong. Can anyone please point out what I might be missing?
Just to be clear, I installed odbcinst by running the command: sudo apt-get install -y odbcinst
**To Reproduce**
I then created a build folder inside the MonetDB-Oct2020_SP5_release directory
When i run the following command : `cmake -DCMAKE_BUILD_TYPE=Release ..`
I get the following error:
```
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
ODBCINST_LIBRARIES (ADVANCED)
linked by target "MonetODBC" in directory /home/arif/Documents/MonetDB-Oct2020_SP5_release/clients/odbc/driver
```
**Expected behavior**
I expected the build to succeed.
**Software versions**
- MonetDB: MonetDB-Oct2020_SP5_release
- Ubuntu 20.04
- self-installed and compiled
**Issue labeling**
Bug
**Additional context**
I'm not sure what I am doing wrong; this is a fresh install of Ubuntu.
Any help would be greatly appreciated.
| Error When trying to build MonetDB-Oct2020_SP5_release from Source | https://api.github.com/repos/MonetDB/MonetDB/issues/7150/comments | 3 | 2021-07-08T09:35:08Z | 2021-08-26T07:38:11Z | https://github.com/MonetDB/MonetDB/issues/7150 | 939,666,679 | 7,150 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
can't open two embedded databases on disk
**To Reproduce**
```
#include "monetdbe.h"
#include <stdio.h>

int main() {
    monetdbe_database db1 = NULL;
    monetdbe_database db2 = NULL;
    if (monetdbe_open(&db1, "/tmp/db1", NULL)) {
        fprintf(stderr, "Failed to open db1\n");
        return -1;
    }
    if (monetdbe_open(&db2, "/tmp/db2", NULL)) {
        fprintf(stderr, "Failed to open db2\n");
        return -1;
    }
    monetdbe_close(db1);
    monetdbe_close(db2);
}
```
**Expected behavior**
open more than one embedded database on disk
**Software versions**
- MonetDB-Oct2020_17
- centos 7
- built from source
| monetdbe: can't open two embedded databases on disk | https://api.github.com/repos/MonetDB/MonetDB/issues/7149/comments | 2 | 2021-07-06T15:36:02Z | 2024-06-27T13:16:05Z | https://github.com/MonetDB/MonetDB/issues/7149 | 938,024,516 | 7,149 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
Select distinct is not working correctly.
It is only returning 1 record and should return 19.
**To Reproduce**
Run the command below:
```
select
    distinct c1, c2
from (
    select 'A' c0, 'a' c1, 1 c2
    union all select 'A', 'a', 2
    union all select 'A', 'b', 3
    union all select 'B', 'a', 4
    union all select 'B', 'b', 5
    union all select 'C', 'c', 6
    union all select 'C', 'a', 7
    union all select 'C', 'b', 8
    union all select 'C', 'c', 9
    union all select 'D', 'd', 0
    union all select 'F', 'a', 1
    union all select 'F', 'b', 2
    union all select 'E', 'c', 3
    union all select 'G', 'd', 4
    union all select 'G', 'e', 5
    union all select 'G', 'a', 6
    union all select 'G', 'b', 7
    union all select 'H', 'c', 8
    union all select 'A', 'd', 9
    union all select 'B', 'e', 0
) T
```
**Expected behavior**
Return 19 records, as shown below:

**Screenshots**

**Software versions**
- MonetDB version number [MonetDB 5 server v11.39.17 (Oct2020-SP5)]
- OS and version: [Windows 10 Pro 64 bits - 20H2]
- Installed from release package | Select distinct is not working correctly | https://api.github.com/repos/MonetDB/MonetDB/issues/7148/comments | 1 | 2021-06-25T18:41:22Z | 2024-06-27T13:16:04Z | https://github.com/MonetDB/MonetDB/issues/7148 | 930,416,750 | 7,148 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
When executing a certain SELECT command with a syntax error through "SQuirreL SQL" (with the JDBC driver), nothing appears on the screen.
However, an error does occur inside the MonetDB server.
**To Reproduce**
Run the SELECT command below:
select c0, sum(c2) from (select 'A' c0, 'a' c1, 1 c2 union all select 'A', 'a', 2) T
**Expected behavior**
The error message appears on the screen.
**Screenshots**


**Software versions**
- MonetDB version number [MonetDB 5 server v11.39.17 (Oct2020-SP5)]
- OS and version: [Windows 10 Pro 64 bits - 20H2]
- Installed from release package | Internal error occurs and is not shown on the screen | https://api.github.com/repos/MonetDB/MonetDB/issues/7147/comments | 1 | 2021-06-25T18:32:08Z | 2024-06-27T13:16:04Z | https://github.com/MonetDB/MonetDB/issues/7147 | 930,411,004 | 7,147 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
The query does not execute because an internal error (crash) occurs.
**To Reproduce**
To reproduce just run the query below:
```
select c, count(distinct b) from (
    select 1 a, 'a' b, 'A' c
    union all select 2, 'b', 'C'
    union all select 3, 'c', 'C'
    union all select 4, 'a', 'A'
    union all select 5, 'c', 'B'
    union all select 6, 'd', 'A'
    union all select 7, 'a', 'B'
    union all select 8, 'b', 'D'
    union all select 9, null, 'D'
) T
group by rollup(c)
```
**Screenshots**

**Software versions**
- MonetDB version number [MonetDB 5 server v11.39.17 (Oct2020-SP5)]
- OS and version: [Windows 10 Pro 64 bits - 20H2]
- Installed from release package | Query produces this error: !ERROR: Could not find %102.%102 | https://api.github.com/repos/MonetDB/MonetDB/issues/7146/comments | 2 | 2021-06-24T21:53:45Z | 2024-06-27T13:16:03Z | https://github.com/MonetDB/MonetDB/issues/7146 | 929,639,657 | 7,146 |
[
"MonetDB",
"MonetDB"
] |
[BondPricesWithNulls.txt](https://github.com/MonetDB/MonetDB/files/6708572/BondPricesWithNulls.txt)
I think I discovered a bug in MonetDB v11.39.17 (Oct2020-SP5).
I attached the sql to re-create the table.
The query:
```
select
count(distinct "t27"."Category") as "c29_category__unique_count__2"
from
(
select *
from "BondPricesWithNulls" as "t6"
where
(
"Category" = 'Supranational'
)
)
as "t27"
group by "t27"."Category"
```
returns :
```
c29_category__unique_count__2
-----------------------------
94
```
Although the correct value is clearly 1. Similar issues happen with non text columns.
The database can be tricked by changing the select list into
` count(distinct concat('',"t27"."Category")) as "c29_category__unique_count__2"`
which should be equivalent, yet with that change the correct result is returned.
If you can think of a better workaround for this issue, it would be greatly appreciated.
| count(distinct ) not working - distinct directive ignored ? | https://api.github.com/repos/MonetDB/MonetDB/issues/7145/comments | 1 | 2021-06-24T10:47:44Z | 2024-06-27T13:16:02Z | https://github.com/MonetDB/MonetDB/issues/7145 | 929,097,650 | 7,145 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
When inserting data from a INT column into a BIGINT column, INT is automatically cast to BIGINT in some queries, but not in some other queries.
**To Reproduce**
Run the following queries:
```
create table t (i bigint, j bigint);
# works:
insert into t(i) select id from sys.functions;
# works:
insert into t(j) select id from sys.functions;
# Error "Append failed", mserver5 says: "BATappend2: !ERROR: Incompatible operands"
insert into t(i, j) select id, id from sys.functions;
# works:
insert into t(i, j) select cast(id as bigint), id from sys.functions;
# Error "Append failed", mserver5 says: "BATappend2: !ERROR: Incompatible operands"
insert into t(i, j) select id, cast(id as bigint) from sys.functions;
```
**Expected behavior**
Since MonetDB seems to do automatically up-casting, all the above queries should work.
**Software versions**
- MonetDB version number [Oct2020-SP5]
- OS and version: [e.g. Mac OS Catalina]
- Installed from release package
| Type up-casting (INT to BIGINT) doesn't always happen automatically | https://api.github.com/repos/MonetDB/MonetDB/issues/7144/comments | 5 | 2021-06-24T08:42:26Z | 2024-06-27T13:16:01Z | https://github.com/MonetDB/MonetDB/issues/7144 | 928,991,491 | 7,144 |
[
"MonetDB",
"MonetDB"
] | Hi,
We have issues running the latest version of MonetDB (MonetDB 5 server 11.39.17 (Oct2020-SP5) (64-bit, 128-bit integers)) on Mac OS (Big Sur - 11.3.1 20E241).
We installed Monetdb from archive as per the instructions: https://www.monetdb.org/wiki/MonetDB:Installing_on_OS_X
When using mclient we get:
`server requires unknown hash`
When using JDBC driver we get:
```
org.monetdb.mcl.MCLException: no supported hash algorithms found in PROT10,COMPRESSION_LZ4
at org.monetdb.mcl.net.MapiSocket.getChallengeResponse(Unknown Source)
at org.monetdb.mcl.net.MapiSocket.connect(Unknown Source)
at org.monetdb.mcl.net.MapiSocket.connect(Unknown Source)
at org.monetdb.jdbc.MonetConnection.<init>(Unknown Source)
at org.monetdb.jdbc.MonetDriver.connect(Unknown Source)
at net.sourceforge.squirrel_sql.fw.sql.SQLDriverManager.getConnection(SQLDriverManager.java:141)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.executeConnect(OpenConnectionCommand.java:136)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.lambda$execute$0(OpenConnectionCommand.java:93)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
```
I tried to set the hash JDBC property in the connection options, but it gives the following error:
```
Unexpected Error occurred attempting to open an SQL connection.
class org.monetdb.mcl.MCLException:
MALException:checkCredentials:hash 'SHA512' backend not found
```
This is the output of the jdbc debug file:
```
RD 1623664713194: read final block: 55 bytes
RX 1623664713194: FKkFu9YqlV:mserver:9:PROT10,COMPRESSION_LZ4:LIT:SHA512:
RD 1623664713195: inserting prompt
TD 1623664713195: write final block: 167 bytes
TX 1623664713195: BIG:monetdb:{SHA512}faf414bf395a4587bbb9e0b2732d589a2f6c59e0f3a838b01be3e17e3c491d9e379062b0075d1473e51fbd693ed52bd46e962d5ff975af8812f5af9015c33a4b:sql:DBFarm_2021_1:
RD 1623664713195: read final block: 63 bytes
RX 1623664713196: !MALException:checkCredentials:hash 'SHA512' backend not found
RD 1623664713196: inserting prompt
```
This thread: https://stackoverflow.com/questions/65840138/monetdb-server-requires-unknown-hash suggests a problem with OpenSSL and the installation of a library. Unfortunately, neither compilation from source nor the installation of a library via a package manager is applicable for our use case.
This also appears in standard error:
```
# MonetDB/SQL module loaded MonetDB was built without OpenSSL, but what you are trying to do requires it.
#2021-06-14 10:44:49: client1: createExceptionInternal: !ERROR: MALException:checkCredentials:hash 'SHA512' backend not found
```
Is this behaviour intentional? Is there any solution to this problem where we can use the monetdb binary (available from your website) without the need to install OpenSSL from source or package manager?
Any help would be greatly appreciated.
| Unable to connect to MonetDB on Mac Big Sur | https://api.github.com/repos/MonetDB/MonetDB/issues/7143/comments | 20 | 2021-06-14T10:08:37Z | 2021-08-12T07:33:23Z | https://github.com/MonetDB/MonetDB/issues/7143 | 920,255,900 | 7,143 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
The parser allows any function type to return a table, but it should be limited to regular functions only.
I am going to push the fix in the next few limits.
| Aggregates returning tables should not be allowed | https://api.github.com/repos/MonetDB/MonetDB/issues/7142/comments | 2 | 2021-06-11T09:29:41Z | 2024-06-27T13:15:59Z | https://github.com/MonetDB/MonetDB/issues/7142 | 918,498,834 | 7,142 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
COUNT(DISTINCT col) does not calculate correctly distinct values
**To Reproduce**
```
drop table if exists test;
create table test (id int, version int);
insert into test values
(1,1),
(1,1),
(1,2),
(1,2),
(2,1),
(2,2),
(2,2),
(3,4),
(3,4);
SELECT
id,
version,
COUNT(distinct version)
FROM test
GROUP BY
id,
version
HAVING
COUNT(distinct version) > 1;
```
**Expected behavior**
The select above should return 0 rows as a result, because with group by by both id and version there is no way count(distinct version) could return anything else than 1. Also, the number reported by count(distinct version) in the result is wrong.
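A plain-Python re-computation of the query (illustration only, not MonetDB code) confirms the HAVING clause can never match when `version` is part of the GROUP BY key:

```python
rows = [(1, 1), (1, 1), (1, 2), (1, 2), (2, 1), (2, 2), (2, 2), (3, 4), (3, 4)]

# Group by (id, version) and collect the distinct version values per group.
groups = {}
for id_, version in rows:
    groups.setdefault((id_, version), set()).add(version)

# HAVING count(distinct version) > 1 — impossible here, since each group's
# key already fixes version to a single value, so the result must be empty.
result = [key for key, versions in groups.items() if len(versions) > 1]
print(result)   # → []
```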
**Software versions**
MonetDB v11.39.17 (Oct2020-SP5) in docker container, running on linux host | COUNT(DISTINCT col) does not calculate correctly distinct values | https://api.github.com/repos/MonetDB/MonetDB/issues/7141/comments | 2 | 2021-06-11T07:37:09Z | 2024-06-27T13:15:59Z | https://github.com/MonetDB/MonetDB/issues/7141 | 918,347,963 | 7,141 |
[
"MonetDB",
"MonetDB"
] | Hi, we have a legacy MonetDB 11.29.4 installation on Debian Stretch that we want to upgrade to 11.39.18. However one of our queries is suffering major performance issues due to what appears to be an incorrect ordering of select and project operations in the SQL query plan when computed columns are present in the view and used in the query.
To reproduce in an empty DB:
**Preparation**
```
drop table plantest0;
drop table plantest1;
drop table plantest2;
drop table plantest3;
drop table plantest4;
drop table plantest5;
drop table plantest6;
drop table plantest7;
drop table plantest8;
drop table plantest9;

create table "sys"."plantest0" ("id" bigint);
create table "sys"."plantest1" ("id" bigint);
create table "sys"."plantest2" ("id" bigint);
create table "sys"."plantest3" ("id" bigint);
create table "sys"."plantest4" ("id" bigint);
create table "sys"."plantest5" ("id" bigint);
create table "sys"."plantest6" ("id" bigint);
create table "sys"."plantest7" ("id" bigint);
create table "sys"."plantest8" ("id" bigint);
create table "sys"."plantest9" ("id" bigint);

drop procedure plantestp;
create procedure plantestp()
begin
    declare rowindex bigint;
    set rowindex = 0;
    while rowindex < 10000000 do
        insert into sys.plantest0 (id) values (100000000 + rowindex);
        insert into sys.plantest1 (id) values (110000000 + rowindex);
        insert into sys.plantest2 (id) values (120000000 + rowindex);
        insert into sys.plantest3 (id) values (130000000 + rowindex);
        insert into sys.plantest4 (id) values (140000000 + rowindex);
        insert into sys.plantest5 (id) values (150000000 + rowindex);
        insert into sys.plantest6 (id) values (160000000 + rowindex);
        insert into sys.plantest7 (id) values (170000000 + rowindex);
        insert into sys.plantest8 (id) values (180000000 + rowindex);
        insert into sys.plantest9 (id) values (190000000 + rowindex);
        set rowindex = rowindex + 1;
    end while;
end;
call plantestp();

drop view plantestv;
create view plantestv as
    select
        v.*, v.id / 10000000 as id_div
    from
    (
        select * from plantest0 union all
        select * from plantest1 union all
        select * from plantest2 union all
        select * from plantest3 union all
        select * from plantest4 union all
        select * from plantest5 union all
        select * from plantest6 union all
        select * from plantest7 union all
        select * from plantest8 union all
        select * from plantest9
    ) as v;
```
**Query 1 - No Computed Column in View Used**
```
select
    id_r * 10000000 as id_range_base,
    count(id_r) as nrows
from
    (select
        id / 10000000
    from
        plantestv v
    where
        id >= 150000000
    ) as t (id_r)
group by
    id_r
order by
    id_r asc;
```
**Query 2 - Computed Column in View Used**
```
select
    id_r * 10000000 as id_range_base,
    count(id_r) as nrows
from
    (select
        id_div
    from
        plantestv v
    where
        id >= 150000000
    ) as t (id_r)
group by
    id_r
order by
    id_r asc;
```
**Results**
In 11.29.4 query 1 and 2 have essentially the same query plan and run in about the same time. In 11.39.18 query 1 has an optimal plan similar to that in 11.29.4 but in 11.39.18 query 2 has a very poor query plan where the computed column is projected before the select to narrow the query. This issue is preventing us from upgrading at the moment because we have a lot of queries like this, and with MonetDB 11.39.18 the performance is just too poor.
Thanks for your time, Graham.
| SQL Query Plan Non Optimal with View | https://api.github.com/repos/MonetDB/MonetDB/issues/7140/comments | 1 | 2021-06-08T21:47:32Z | 2024-06-27T13:15:58Z | https://github.com/MonetDB/MonetDB/issues/7140 | 915,554,156 | 7,140 |
[
"MonetDB",
"MonetDB"
] | Hi MonetDB dev team,
When I try out the Python map UDF functionality, the PYTHON_MAP UDF's aggr_group variable has wrong data: it does not contain the aggregate group id of each item.
My data is a table with 800.000 rows and a "career" column with 20 distinct values.
My query is: select career, python_aggr(id) from fhs_product_dim group by career;
In the MonetDB sql explain, python UDF call is:
| X_29:bat[:lng] := pyapi3map.subeval_aggr(0x55d9a82aa680:ptr, "{ # My python code }":str, X_23:bat[:lng], X_25:bat[:oid], C_26:bat[:o :
: id], true:bit); :
X_23: correctly has "id" of all rows which are what I want to do computation on.
X_25: incorrectly has 76 items (it should have 800,000 aggregation group ids)
The reason it returns 76 items is that MonetDB runs the query in 4 parts: 3 of them return 20 items and the other returns 16.
The sql explain:
sql>explain select career, python_aggr(id) from fhs_product_dim group by career;
+------------------------------------------------------------------------------+
| mal |
+==============================================================================+
| function user.main():void; |
| X_1:void := querylog.define("explain select career, python_sum(id) from |
: fhs_product_dim group by career;":str, "default_pipe":str, 23:int); :
| X_34:bat[:str] := bat.pack("sys.fhs_product_dim":str, "sys.%1":str); |
| X_35:bat[:str] := bat.pack("career":str, "%1":str); |
| X_36:bat[:str] := bat.pack("varchar":str, "bigint":str); |
| X_37:bat[:int] := bat.pack(255:int, 64:int); |
| X_38:bat[:int] := bat.pack(0:int, 0:int); |
| X_4:int := sql.mvc(); |
################################################################################
##### C_88: get oid of the first 1/4 of the table (200,000 items)
################################################################################
| C_88:bat[:oid] := sql.tid(X_4:int, "sys":str, "fhs_product_dim":str, 0:i |
: nt, 4:int); :
################################################################################
##### X_107: career of the first 1/4 of the table (200,000 items)
################################################################################
| X_107:bat[:str] := sql.bind(X_4:int, "sys":str, "fhs_product_dim":str, " |
: career":str, 0:int, 0:int, 4:int); :
################################################################################
##### X_124: mapping oid-career of the 200,000 items.
################################################################################
| X_124:bat[:str] := algebra.projection(C_88:bat[:oid], X_107:bat[:str]); |
| (X_131:bat[:oid], C_132:bat[:oid]) := group.groupdone(X_124:bat[:str]); |
################################################################################
##### X_134: has 20 items; this X_134 is gathered later to form the aggr_group variable.
################################################################################
| X_134:bat[:str] := algebra.projection(C_132:bat[:oid], X_124:bat[:str]); |
| C_90:bat[:oid] := sql.tid(X_4:int, "sys":str, "fhs_product_dim":str, 1:i |
: nt, 4:int); :
| X_108:bat[:str] := sql.bind(X_4:int, "sys":str, "fhs_product_dim":str, " |
: career":str, 0:int, 1:int, 4:int); :
| X_125:bat[:str] := algebra.projection(C_90:bat[:oid], X_108:bat[:str]); |
| (X_135:bat[:oid], C_136:bat[:oid]) := group.groupdone(X_125:bat[:str]); |
| X_138:bat[:str] := algebra.projection(C_136:bat[:oid], X_125:bat[:str]); |
| C_92:bat[:oid] := sql.tid(X_4:int, "sys":str, "fhs_product_dim":str, 2:i |
: nt, 4:int); :
| X_109:bat[:str] := sql.bind(X_4:int, "sys":str, "fhs_product_dim":str, " |
: career":str, 0:int, 2:int, 4:int); :
| X_126:bat[:str] := algebra.projection(C_92:bat[:oid], X_109:bat[:str]); |
| (X_139:bat[:oid], C_140:bat[:oid]) := group.groupdone(X_126:bat[:str]); |
| X_142:bat[:str] := algebra.projection(C_140:bat[:oid], X_126:bat[:str]); |
| C_94:bat[:oid] := sql.tid(X_4:int, "sys":str, "fhs_product_dim":str, 3:i |
: nt, 4:int); :
| X_110:bat[:str] := sql.bind(X_4:int, "sys":str, "fhs_product_dim":str, " |
: career":str, 0:int, 3:int, 4:int); :
| X_127:bat[:str] := algebra.projection(C_94:bat[:oid], X_110:bat[:str]); |
| (X_143:bat[:oid], C_144:bat[:oid]) := group.groupdone(X_127:bat[:str]); |
| X_146:bat[:str] := algebra.projection(C_144:bat[:oid], X_127:bat[:str]); |
| X_161:bat[:str] := mat.packIncrement(X_134:bat[:str], 4:int); |
| X_163:bat[:str] := mat.packIncrement(X_161:bat[:str], X_138:bat[:str]); |
| X_164:bat[:str] := mat.packIncrement(X_163:bat[:str], X_142:bat[:str]); |
| X_24:bat[:str] := mat.packIncrement(X_164:bat[:str], X_146:bat[:str]); |
| (X_25:bat[:oid], C_26:bat[:oid]) := group.groupdone(X_24:bat[:str]); |
| X_28:bat[:str] := algebra.projection(C_26:bat[:oid], X_24:bat[:str]); |
| X_95:bat[:lng] := sql.bind(X_4:int, "sys":str, "fhs_product_dim":str, "i |
: d":str, 0:int, 0:int, 4:int); :
| X_120:bat[:lng] := algebra.projection(C_88:bat[:oid], X_95:bat[:lng]); |
| X_96:bat[:lng] := sql.bind(X_4:int, "sys":str, "fhs_product_dim":str, "i |
: d":str, 0:int, 1:int, 4:int); :
| X_121:bat[:lng] := algebra.projection(C_90:bat[:oid], X_96:bat[:lng]); |
| X_97:bat[:lng] := sql.bind(X_4:int, "sys":str, "fhs_product_dim":str, "i |
: d":str, 0:int, 2:int, 4:int); :
| X_122:bat[:lng] := algebra.projection(C_92:bat[:oid], X_97:bat[:lng]); |
| X_98:bat[:lng] := sql.bind(X_4:int, "sys":str, "fhs_product_dim":str, "i |
: d":str, 0:int, 3:int, 4:int); :
| X_123:bat[:lng] := algebra.projection(C_94:bat[:oid], X_98:bat[:lng]); |
| X_166:bat[:lng] := mat.packIncrement(X_120:bat[:lng], 4:int); |
| X_167:bat[:lng] := mat.packIncrement(X_166:bat[:lng], X_121:bat[:lng]); |
| X_168:bat[:lng] := mat.packIncrement(X_167:bat[:lng], X_122:bat[:lng]); |
| X_23:bat[:lng] := mat.packIncrement(X_168:bat[:lng], X_123:bat[:lng]); |
################################################################################
##### X_25 is invalid: it is computed from the result of groupdone on "career",
##### while it should contain the group id of each item.
################################################################################
| X_29:bat[:lng] := pyapi3map.subeval_aggr(0x55d9a82aa680:ptr, "{
# My python code };":str, X_23:bat[:lng], X_25:bat[:oid], C_26:bat[:o :
: id], true:bit); :
| sql.resultSet(X_34:bat[:str], X_35:bat[:str], X_36:bat[:str], X_37:bat[: |
: int], X_38:bat[:int], X_28:bat[:str], X_29:bat[:lng]); :
| end user.main; |
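For reference, the setup can be sketched minimally as follows. The real python_aggr body was not included above, so this definition is an assumption (any PYTHON_MAP aggregate over id that reads aggr_group should hit the same plan); `LANGUAGE PYTHON_MAP` may be spelled `PYTHON3_MAP` depending on the MonetDB version:

```sql
-- Hypothetical minimal UDF; the real python_aggr body is not shown above.
CREATE AGGREGATE python_aggr(val BIGINT) RETURNS BIGINT
LANGUAGE PYTHON_MAP {
    # aggr_group is expected to hold one group id per input value,
    # so the per-group sums can be computed like this:
    import numpy
    unique = numpy.unique(aggr_group)
    return [numpy.sum(val[aggr_group == g]) for g in unique]
};

SELECT career, python_aggr(id) FROM fhs_product_dim GROUP BY career;
```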
Please let me know what more I should do to assist you in resolving the bug.
Thank you. | PYTHON_MAP UDF aggr_group variable has wrong data | https://api.github.com/repos/MonetDB/MonetDB/issues/7139/comments | 6 | 2021-06-04T10:58:13Z | 2023-09-13T14:55:26Z | https://github.com/MonetDB/MonetDB/issues/7139 | 911,385,568 | 7,139 |
[
"MonetDB",
"MonetDB"
] | Hi MonetDB dev,
I encounter this MonetDB crash when I try out the Python UDF.
Test data:
sql>select * from integers;
+------+
| i |
+======+
| 1 |
| 2 |
| 3 |
| 4 |
| 5 |
| 6 |
| 7 |
| 8 |
| 9 |
| 10 |
+------+
sql> select python_sum(i) from integers where i = 1 group by i; ==> segfault
python_sum is just a simple PYTHON_MAP function: return numpy.sum(v);
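For completeness, the UDF as described can be sketched like this; the exact original definition is not shown above, so this is an assumed reconstruction:

```sql
-- Assumed reconstruction of python_sum as described above.
CREATE AGGREGATE python_sum(v INTEGER) RETURNS INTEGER
LANGUAGE PYTHON_MAP {
    import numpy
    return numpy.sum(v)
};

-- This then segfaults:
SELECT python_sum(i) FROM integers WHERE i = 1 GROUP BY i;
```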
When I rebuild MonetDB with debuginfo, this is where the code fails.
pyapi3.c
for (element_it = 0; element_it < elements; element_it++) {
group_counts[aggr_group_arr[element_it]]++; ==> aggr_group_arr is null
}
I hope you can fix it soon. Thank you. | Monetdb Python UDF crashes because of null aggr_group_arr | https://api.github.com/repos/MonetDB/MonetDB/issues/7138/comments | 3 | 2021-06-04T10:52:53Z | 2024-06-27T13:15:57Z | https://github.com/MonetDB/MonetDB/issues/7138 | 911,381,988 | 7,138 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
I am using the MonetDB JDBC driver to load in TPC-C data using OLTPBench (I know that MonetDB and column stores are not designed for TPC-C --- I'm just setting it up this way for research purposes). While loading the data, mserver5 runs into a segmentation fault:
`[Mon May 31 20:51:02 2021] mserver5[7524]: segfault at 7f44f8296ff8 ip 00007f450672b94a sp 00007f44f8297000 error 6 in libmonetdb5.so.30.0.7[7f45066a3000+24c000`
I can reproduce this issue consistently on MonetDB 11.39.17 on release and debug builds. I have uploaded a coredump for the segfault here: https://drive.google.com/file/d/1pcSbrUnC7CUFHWEJsWspZBXRFqU1_-xr/view?usp=sharing
It looks like there is a stack overflow in the MALInterpreter. Here are some (truncated) lines from the stack in the core dump:
```
#43519 0x00007f450672b94f in advanceQRYqueue () at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_runtime.c:177
#43520 0x00007f450672b94f in advanceQRYqueue () at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_runtime.c:177
#43521 0x00007f450672b94f in advanceQRYqueue () at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_runtime.c:177
#43522 0x00007f450672b94f in advanceQRYqueue () at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_runtime.c:177
#43523 0x00007f450672b94f in advanceQRYqueue () at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_runtime.c:177
#43524 0x00007f450672b94f in advanceQRYqueue () at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_runtime.c:177
#43525 0x00007f450672b94f in advanceQRYqueue () at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_runtime.c:177
#43526 0x00007f450672b94f in advanceQRYqueue () at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_runtime.c:177
#43527 0x00007f450672b94f in advanceQRYqueue () at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_runtime.c:177
#43528 0x00007f450672b94f in advanceQRYqueue () at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_runtime.c:177
#43529 0x00007f450672b94f in advanceQRYqueue () at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_runtime.c:177
#43530 0x00007f450672b94f in advanceQRYqueue () at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_runtime.c:177
#43531 0x00007f450672b94f in advanceQRYqueue () at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_runtime.c:177
#43532 0x00007f450672b94f in advanceQRYqueue () at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_runtime.c:177
#43533 0x00007f450672b94f in advanceQRYqueue () at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_runtime.c:177
#43534 0x00007f450672b94f in advanceQRYqueue () at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_runtime.c:177
#43535 0x00007f450672b94f in advanceQRYqueue () at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_runtime.c:177
#43536 0x00007f450672b94f in advanceQRYqueue () at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_runtime.c:177
#43537 0x00007f450672b94f in advanceQRYqueue () at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_runtime.c:177
#43538 0x00007f450672c706 in runtimeProfileInit (cntxt=0xd12350, mb=0x7f448860c180, stk=0x7f44885a56c0) at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_runtime.c:279
#43539 0x00007f4506735ec0 in runMALsequence (cntxt=0xd12350, mb=0x7f448860c180, startpc=1, stoppc=0, stk=0x7f44885a56c0, env=0x0, pcicaller=0x0) at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_interpreter.c:503
#43540 0x00007f450673513d in runMAL (cntxt=0xd12350, mb=0x7f448860c180, mbcaller=0x0, env=0x0) at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_interpreter.c:334
#43541 0x00007f44fc225e1e in SQLrun (c=0xd12350, m=0x7f448810a580) at /hdd1/MonetDB-11.39.17/sql/backends/monet5/sql_execute.c:259
#43542 0x00007f44fc227273 in SQLengineIntern (c=0xd12350, be=0x7f44882077b0) at /hdd1/MonetDB-11.39.17/sql/backends/monet5/sql_execute.c:632
#43543 0x00007f44fc224c2f in SQLengine (c=0xd12350) at /hdd1/MonetDB-11.39.17/sql/backends/monet5/sql_scenario.c:1228
#43544 0x00007f4506754b75 in runPhase (c=0xd12350, phase=4) at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_scenario.c:449
#43545 0x00007f4506754cdf in runScenarioBody (c=0xd12350, once=0) at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_scenario.c:475
#43546 0x00007f4506754f02 in runScenario (c=0xd12350, once=0) at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_scenario.c:507
#43547 0x00007f4506756b5c in MSserveClient (c=0xd12350) at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_session.c:485
#43548 0x00007f4506756620 in MSscheduleClient (command=0x7f44880008d0 "", challenge=0x7f44f8495dc3 "wgwSdaKzk", fin=0x7f44880028f0, fout=0x7f448c00ea10, protocol=PROTOCOL_9, blocksize=8190) at /hdd1/MonetDB-11.39.17/monetdb5/mal/mal_session.c:372
#43549 0x00007f4506804014 in doChallenge (data=0x7f448c0008d0) at /hdd1/MonetDB-11.39.17/monetdb5/modules/mal/mal_mapi.c:269
#43550 0x00007f4505f79c23 in THRstarter (a=0x7f448c010f60) at /hdd1/MonetDB-11.39.17/gdk/gdk_utils.c:1490
#43551 0x00007f4505fe82b9 in thread_starter (arg=0x7f448c010fd0) at /hdd1/MonetDB-11.39.17/gdk/gdk_system.c:776
#43552 0x00007f45056d86ba in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#43553 0x00007f450540e41d in clone () from /lib/x86_64-linux-gnu/libc.so.6
```
**To Reproduce**
- Download OLTPBench: https://github.com/oltpbenchmark/oltpbench
- Run mserver5 (/usr/local/monetdb/bin/mserver5 --dbpath=/hdd2/monet_chbench/tpcc --set monet_vault_key=/hdd2/monet_chbench/tpcc/.vaultkey --set mapi_listenaddr=all)
- Use OLTPBench to create a TPC-C database schema for MonetDB
- Use OLTPBench to load TPC-C database (scale factor 10) into MonetDB (This will take a while).
- Observe a segmentation fault as shown above during load
**Expected behavior**
- The data loading should complete successfully
**Software versions**
- MonetDB 11.39.17 Version:
```
MonetDB 5 server 11.39.17 (64-bit, 128-bit integers)
This is an unreleased version
Copyright (c) 1993 - July 2008 CWI
Copyright (c) August 2008 - 2021 MonetDB B.V., all rights reserved
Visit https://www.monetdb.org/ for further information
Found 31.4GiB available memory, 24 available cpu cores
Libraries:
libpcre: 8.38 2015-11-23
openssl: OpenSSL 1.0.2g 1 Mar 2016
libxml2: 2.9.3
Compiled by: bjglasbe@tem101 (x86_64-pc-linux-gnu)
Compilation: /usr/bin/cc -Werror -Wall -Wextra -Werror-implicit-function-declaration -Wpointer-arith -Wundef -Wformat=2 -Wformat-overflow=1 -Wno-format-truncation -Wno-format-nonliteral -Wno-cast-function-type -Winit-self -Winvalid-pch -Wmissing-declarations -Wmissing-format-attribute -Wmissing-prototypes -Wno-missing-field-initializers -Wold-style-definition -Wpacked -Wunknown-pragmas -Wvariadic-macros -Wstack-protector -fstack-protector-all -Wpacked-bitfield-compat -Wsync-nand -Wjump-misses-init -Wmissing-include-dirs -Wlogical-op -Wduplicated-cond -Wduplicated-branches -Wrestrict -Wnested-externs -Wmissing-noreturn -Wuninitialized -Wno-char-subscripts -Wunreachable-code
Linking : /usr/bin/ld
```
- Operating System:
```
$ uname -a
Linux tem101 4.15.0-48-generic #51~16.04.1-Ubuntu SMP Fri Apr 5 12:01:12 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -cs
xenial
```
- Self-compiled using 11.39.17 tarball:
```
$ g++ --version
g++ (Ubuntu 8.4.0-1ubuntu1~16.04.1) 8.4.0
$ echo $CONFIGURE_OPTIONS
-DCMAKE_BUILD_TYPE=Debug -DASSERT=ON -DSTRICT=ON
```
Please let me know if you need any other information! | Segmentation fault while loading data | https://api.github.com/repos/MonetDB/MonetDB/issues/7137/comments | 6 | 2021-06-01T18:08:33Z | 2024-06-27T13:15:56Z | https://github.com/MonetDB/MonetDB/issues/7137 | 908,560,823 | 7,137 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
MERGE statement is deleting rows if the column is set as NOT NULL even though it should not.
**To Reproduce**
```
drop table sys.v;
CREATE TABLE "sys"."v" (
"id" INTEGER NOT NULL,
"version" INTEGER NOT NULL,
"value" DOUBLE
);
insert into sys."v" values (1,1622470128,1);
insert into sys."v" values (2,1622470128,2);
insert into sys."v" values (3,1622470128,3);
insert into sys."v" values (4,1622470128,4);
insert into sys."v" values (5,1622470128,5);
MERGE INTO sys."v" dst
USING (SELECT id, version FROM (SELECT id, version, DENSE_RANK() OVER (PARTITION BY id ORDER BY id, version DESC) AS rn FROM sys."v")t WHERE rn > 1 ) src
ON src.id = dst.id AND src.version = dst.version
WHEN MATCHED THEN DELETE;
```
The above MERGE statement deletes all rows in the table, even though it should not.
**Expected behavior**
```
drop table sys.v;
CREATE TABLE "sys"."v" (
"id" INTEGER,
"version" INTEGER,
"value" DOUBLE
);
insert into sys."v" values (1,1622470128,1);
insert into sys."v" values (2,1622470128,2);
insert into sys."v" values (3,1622470128,3);
insert into sys."v" values (4,1622470128,4);
insert into sys."v" values (5,1622470128,5);
MERGE INTO sys."v" dst
USING (SELECT id, version FROM (SELECT id, version, DENSE_RANK() OVER (PARTITION BY id ORDER BY id, version DESC) AS rn FROM sys."v")t WHERE rn > 1 ) src
ON src.id = dst.id AND src.version = dst.version
WHEN MATCHED THEN DELETE;
```
If the column is NOT set as NOT NULL, the statement works as expected.
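Until this is fixed, a possible interim workaround is to express the deletion without MERGE. This is only a sketch: it assumes a correlated EXISTS in DELETE is handled correctly and has not been verified against this bug:

```sql
-- Hedged workaround sketch: same intended deletion, without MERGE.
DELETE FROM sys."v"
WHERE EXISTS (
    SELECT 1
    FROM (SELECT id, version,
                 DENSE_RANK() OVER (PARTITION BY id
                                    ORDER BY id, version DESC) AS rn
          FROM sys."v") src
    WHERE src.rn > 1
      AND src.id = "v".id
      AND src.version = "v".version
);
```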
**Software versions**
Linux, Docker, latest version of MDB (MonetDB v11.39.17 (Oct2020-SP5))
| MERGE statement is deleting rows if the column is set as NOT NULL even though it should not | https://api.github.com/repos/MonetDB/MonetDB/issues/7136/comments | 7 | 2021-05-31T14:30:32Z | 2024-06-27T13:15:55Z | https://github.com/MonetDB/MonetDB/issues/7136 | 907,522,270 | 7,136 |
[
"MonetDB",
"MonetDB"
] | At the moment, this is the documentation we get with `\?` at the mclient prompt:
```
\? - show this message
\<file - read input from file
\>file - save response in file, or stdout if no file is given
\|cmd - pipe result to process, or stop when no command is given
\history - show the readline history
\help - synopsis of the SQL syntax
\D table - dumps the table, or the complete database if none given.
\d[Stvsfn]+ [obj] - list database objects, or describe if obj given
\A - enable auto commit
\a - disable auto commit
\e - echo the query in sql formatting mode
\t - set the timer {none,clock,performance} (none is default)
\f - format using renderer {csv,tab,raw,sql,xml,trash,rowcount,expanded}
\w# - set maximal page width (-1=unlimited, 0=terminal width, >0=limit to num)
\r# - set maximum rows per page (-1=raw)
\L file - save client-server interaction
\X - trace mclient code
\q - terminate session and quit mclient
```
This is insufficient, because:
* Inexperienced users may not understand what the single-sentence explanation means.
* Any user might forget the exact semantics. Example: For `\<`, can `file` be a path or just a plain file name? Is an extension appended if none is supplied? Are all commands in the file executed as a single transaction? etc.
* Some extra parameters are not explained, such as the second letter of the \d command.
If one tries to get expanded help for one of these commands by inputting `\? \d` or `\h \<`, either the general help is printed or nothing happens.
I suggest that per-command help be written, similarly to the `\h` command for SQL commands; and that a link be added to the help text for further information.
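For illustration only, per-command help could look something like this; the command and the wording shown are a hypothetical mock-up, not existing mclient output:

```
sql>\? \<
\<file - read input from file
    "file" may be a plain file name or a relative/absolute path.
    Statements in the file are executed as if typed at the prompt,
    under the session's current auto-commit setting.
    See the online documentation for more.
```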
**Software versions**
- MonetDB 11.39.11
- Devuan GNU/Linux Beowulf
- Built from source
| Need better documentation for mclient backslash commands | https://api.github.com/repos/MonetDB/MonetDB/issues/7135/comments | 1 | 2021-05-26T15:18:29Z | 2024-06-27T13:15:54Z | https://github.com/MonetDB/MonetDB/issues/7135 | 902,549,326 | 7,135 |
[
"MonetDB",
"MonetDB"
] | Consider the following commands on an empty DB and their output:
```
sql>CREATE TABLE t1 AS SELECT 1 AS c1, 2 AS c2;
operation successful
sql>\d t1
CREATE TABLE "sys"."t1" (
"c1" TINYINT,
"c2" TINYINT
);
sql>\d
TABLE sys.t1
sql>SELECT * FROM sys.t1;
+------+------+
| c1 | c2 |
+======+======+
| 1 | 2 |
+------+------+
1 tuple
```
The `t1` table is added to the `sys` schema. I believe it is not reasonable for it to be in the same schema as the actual system-internal tables. So please don't put it there.
This can be done in one of two ways:
1. Allow for tables with a null or empty schema (not sure whether the SQL standard supports this).
2. Choose a default schema that is not `sys`. For example: `default`. Or perhaps the name of the database.
**Expected behavior**
```
sql>CREATE TABLE t1 AS SELECT 1 AS c1, 2 AS c2;
operation successful
sql>\d t1
CREATE TABLE "t1" (
"c1" TINYINT,
"c2" TINYINT
);
sql>\d
TABLE t1
sql>SELECT * FROM sys.t1;
SELECT: no such table 'sys.t1'
```
or
```
sql>CREATE TABLE t1 AS SELECT 1 AS c1, 2 AS c2;
operation successful
sql>\d t1
CREATE TABLE "mydb"."t1" (
"c1" TINYINT,
"c2" TINYINT
);
sql>\d
TABLE t1
sql>SELECT * FROM sys.t1;
SELECT: no such table 'sys.t1'
```
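In the meantime, a workaround sketch using existing statements (CREATE SCHEMA and SET SCHEMA, which I believe MonetDB already supports) is:

```sql
-- Workaround sketch: create a user schema and make it the session
-- default, so unqualified CREATE TABLE no longer lands in "sys".
CREATE SCHEMA mydb;
SET SCHEMA mydb;
CREATE TABLE t1 AS SELECT 1 AS c1, 2 AS c2;  -- created as mydb.t1
```

SET SCHEMA only affects the current session; presumably a persistent per-user default can be set with something like ALTER USER ... SET SCHEMA, but the point of this issue is that the out-of-the-box default should not be `sys`.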
**Software versions**
- MonetDB 11.39.11
- Devuan GNU/Linux Beowulf
- Built from source | Tables added without actively setting a schema shouldn't be placed in the sys schema | https://api.github.com/repos/MonetDB/MonetDB/issues/7134/comments | 11 | 2021-05-26T14:22:48Z | 2022-09-30T14:04:51Z | https://github.com/MonetDB/MonetDB/issues/7134 | 902,470,455 | 7,134 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
I have a query in the form of `WITH <alias> ( SELECT x ) DELETE FROM <subquery with join with alias>`. It should find and delete all tuples in a table that do not have a corresponding tuple in another table.
Problem 1: this query deletes some of the qualifying tuples but not all, and it also deletes some of the non-qualifying tuples.
Problem 2: the test query always deletes 16 tuples, so when the table contains e.g. only 12 tuples, it corrupts the database: `16 affected rows` is reported, but `count(*)` returns -4 while `sys.storage()` returns 12 for the per-column count.
**To Reproduce**
Problem 1. the following queries show that wrong tuples were deleted:
```
DROP TABLE IF EXISTS test;
CREATE TABLE test (ID BIGINT, UUID STRING, sec BIGINT, data VARCHAR(1000));
INSERT INTO test VALUES
(1000000, 'uuid0000', 1621539934, 'a0' )
, (1000001, 'uuid0001', 1621539934, 'a1' )
, (1000002, 'uuid0002', 1621539934, 'a2' )
, (1000003, 'uuid0003', 1621539934, 'a3' );
DROP TABLE IF EXISTS extra;
CREATE TABLE extra (ID BIGINT, UUID STRING, sec BIGINT, extra VARCHAR(1000));
INSERT INTO extra VALUES
(1000009, 'uuid0009', 1621539934, 'a9' )
, (1000009, 'uuid0009', 1621539934, 'a9' )
, (1000009, 'uuid0009', 1621539934, 'a9' )
, (1000009, 'uuid0009', 1621539934, 'a9' )
, (1000000, 'uuid0000', 1621539934, 'a0' )
, (1000000, 'uuid0000', 1621539934, 'a0' )
, (1000000, 'uuid0000', 1621539934, 'a0' )
, (1000000, 'uuid0000', 1621539934, 'a0' )
, (1000001, 'uuid0001', 1621539934, 'a1' )
, (1000001, 'uuid0001', 1621539934, 'a1' )
, (1000001, 'uuid0001', 1621539934, 'a1' )
, (1000001, 'uuid0001', 1621539934, 'a1' )
, (1000002, 'uuid0002', 1621539934, 'a2' )
, (1000002, 'uuid0002', 1621539934, 'a2' )
, (1000002, 'uuid0002', 1621539934, 'a2' )
, (1000002, 'uuid0002', 1621539934, 'a2' )
, (1000003, 'uuid0003', 1621539934, 'a3' )
, (1000003, 'uuid0003', 1621539934, 'a3' )
, (1000003, 'uuid0003', 1621539934, 'a3' )
, (1000003, 'uuid0003', 1621539934, 'a3' );
-- validate 4x of each id.
SELECT id, COUNT(*) as cnt FROM extra GROUP BY id ORDER BY id;
WITH ca AS
(
SELECT e.ID as id
, e.UUID as uuid
, e.sec as sec
FROM extra AS e
LEFT OUTER JOIN test AS t
ON t.ID = e.ID
AND t.UUID = e.UUID
AND t.sec = e.sec
WHERE t.ID IS NULL
AND t.UUID IS NULL
AND t.sec IS NULL
)
DELETE
FROM extra AS i
WHERE i.id = ca.id
AND i.sec = ca.sec
AND i.uuid = ca.uuid
;
-- wrong tuples were deleted:
SELECT id, COUNT(*) as cnt FROM extra GROUP BY id ORDER BY id;
```
Problem 2. the following queries show more tuples were deleted than there were in the table
```
DROP TABLE IF EXISTS test;
CREATE TABLE test (ID BIGINT, UUID STRING, sec BIGINT, data VARCHAR(1000));
INSERT INTO test VALUES
(1000000, 'uuid0000', 1621539934, 'a0' )
, (1000003, 'uuid0003', 1621539934, 'a3' );
DROP TABLE IF EXISTS extra;
CREATE TABLE extra (ID BIGINT, UUID STRING, sec BIGINT, extra VARCHAR(1000));
INSERT INTO extra VALUES
(1000009, 'uuid0009', 1621539934, 'a9' )
, (1000009, 'uuid0009', 1621539934, 'a9' )
, (1000009, 'uuid0009', 1621539934, 'a9' )
, (1000009, 'uuid0009', 1621539934, 'a9' )
, (1000000, 'uuid0000', 1621539934, 'a0' )
, (1000000, 'uuid0000', 1621539934, 'a0' )
, (1000000, 'uuid0000', 1621539934, 'a0' )
, (1000000, 'uuid0000', 1621539934, 'a0' )
, (1000003, 'uuid0003', 1621539934, 'a3' )
, (1000003, 'uuid0003', 1621539934, 'a3' )
, (1000003, 'uuid0003', 1621539934, 'a3' )
, (1000003, 'uuid0003', 1621539934, 'a3' );
-- validate 4x of each id.
SELECT id, COUNT(*) as cnt FROM extra GROUP BY id ORDER BY id;
WITH ca AS
(
SELECT e.ID as id
, e.UUID as uuid
, e.sec as sec
FROM extra AS e
LEFT OUTER JOIN test AS t
ON t.ID = e.ID
AND t.UUID = e.UUID
AND t.sec = e.sec
WHERE t.ID IS NULL
AND t.UUID IS NULL
AND t.sec IS NULL
)
DELETE
FROM extra AS i
WHERE i.id = ca.id
AND i.sec = ca.sec
AND i.uuid = ca.uuid
; -- 16 affected rows
-- Count is -4
SELECT COUNT(*) FROM extra;
-- Count is 12
select table, column, type, count from sys.storage() where table = 'extra';
```
**Expected behaviour**
The DELETE query should only delete the four tuples with id = 1000009.
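For what it's worth, the intended cleanup can also be written as a direct anti-join without the WITH form; this sketch has not been verified to be unaffected by the bug above:

```sql
-- Delete rows from "extra" that have no matching tuple in "test".
DELETE FROM extra
WHERE NOT EXISTS (
    SELECT 1
    FROM test t
    WHERE t.ID = extra.ID
      AND t.UUID = extra.UUID
      AND t.sec = extra.sec
);
```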
**Software versions**
- MonetDB version number [Jun2020-SP1, Jun2020 (hg id: ec64252a9d24), Oct2020-SP5]
- OS and version: [MacOS Catalina and Fedora 32]
- Installed from release package or self-installed and compiled: both
| WITH <alias> ( SELECT x ) DELETE FROM ... deletes wrong tuples | https://api.github.com/repos/MonetDB/MonetDB/issues/7133/comments | 2 | 2021-05-25T12:31:09Z | 2024-06-27T13:15:53Z | https://github.com/MonetDB/MonetDB/issues/7133 | 900,744,620 | 7,133 |
[
"MonetDB",
"MonetDB"
] | When you're doing data cleaning, you typically apply a bunch of UPDATE and DELETE queries with a certain condition after determining those records need correction or removal. But it is also relatively common that you wish to apply the same criterion in other tables - possibly related ones. It would therefore be convenient, instead of writing:
```
DELETE FROM t1 WHERE cond;
DELETE FROM t2 WHERE cond;
```
to be able to write:
```
DELETE FROM t1, t2 WHERE cond;
```
and similarly for update.
The semantics of the second query would be identical to the first, assuming the condition on t1 doesn't involve t2. If it does, either this combined-deletion/update command can be discarded (easier), or it can be based on the state of both tables before the deletion/update (more convenient, and AFAICT well-defined).
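To make the intended semantics concrete, the combined form would be defined as shorthand for today's explicit sequence (`cond` is a placeholder; wrapping the sequence in a transaction reflects my reading that the combined statement should be atomic):

```sql
-- Proposed:  DELETE FROM t1, t2 WHERE cond;
-- Equivalent today:
START TRANSACTION;
DELETE FROM t1 WHERE cond;
DELETE FROM t2 WHERE cond;
COMMIT;
```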
This should not, AFAICT, interfere with proper SQL queries, as currently, adding the comma after the table identifier is a syntax error.
The output could be the overall number of records affected (easier) or the number of records affected in each table (more information, slightly more coding).
If this is supported, another level of this feature could be:
```
DELETE FROM schema1.* WHERE cond;
```
which is equivalent to listing all tables in the schema, separated by commas.
**Software versions**
- MonetDB 11.39.11
- Devuan GNU/Linux Beowulf
- Built from source
| Support deletion from several tables with the same statement | https://api.github.com/repos/MonetDB/MonetDB/issues/7132/comments | 4 | 2021-05-25T08:39:29Z | 2024-06-28T10:06:49Z | https://github.com/MonetDB/MonetDB/issues/7132 | 900,500,402 | 7,132 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
I used my fuzzing tool to test monetdb , and found it return "TypeException:user.main[19]:'batcalc.between' undefined" when processing specific sql.
**To Reproduce**
1. monetdb destroy -f test_db
2. monetdb create test_db
3. monetdb release test_db
4. mclient -u monetdb -d test_db
5. sql>
`create table t_qh (
c_f INTEGER ,
c_y2 INTEGER ,
c_i768 INTEGER ,
c_tqx TEXT ,
c_mknkhml TEXT,
primary key(c_f, c_y2),
unique(c_y2)
);`
6. sql>
`create table t_ckfystsc (
c_kvhq5p INTEGER ,
c_aifpl INTEGER ,
c_jf6 TEXT ,
c_f31ix TEXT NOT NULL,
c_lo0zqfe TEXT ,
c_zv INTEGER ,
c_l153 INTEGER ,
primary key(c_zv),
unique(c_zv)
);`
7. sql>
`create view t_vehkuero as
select distinct
abs(
cast(subq_0.c1 as INTEGER)) as c0
from
(select
ref_0.c_f as c0,
ref_0.c_f as c1,
ref_0.c_i768 as c2,
ref_0.c_y2 as c3
from
t_qh as ref_0) as subq_0
where cast(nullif('zdo',
'apqonv1a') as CHAR) like 'aou%2' AND length(
cast(case when subq_0.c1 = (
select
subq_0.c3 as c0
from
t_qh as ref_6
where (subq_0.c0 <> (
select
subq_0.c0 as c0
from
t_qh as ref_7))
) then 'xk' else 'xk' end
as CHAR)) <> subq_0.c0;`
8. sql>
`select
case when subq_0.c0 in (
select
ref_5.c_tqx as c0
from
t_qh as ref_5
where ref_5.c_y2 in (
select
ref_6.c_y2 as c0
from
t_qh as ref_6
union
select
2 as c0
from
t_vehkuero as ref_7)) then cast(nullif(subq_0.c0,
subq_0.c0) as CHAR) else cast(nullif(subq_0.c0,
subq_0.c0) as CHAR) end
as c0
from
(select
ref_0.c_lo0zqfe as c0,
ref_0.c_aifpl as c1
from
t_ckfystsc as ref_0) as subq_0;`
7. "TypeException:user.main[273]:'bat.append' undefined in: bat.append(X_381:bat[:str], X_368:bat[:int], true:bit);" occur!
**Expected behavior**
The SELECT should return its result normally.
**Software versions**
- MonetDB Oct2020_17
- Ubuntu 18.04
| Bug report: TypeException:user.main[273]:'bat.append' undefined | https://api.github.com/repos/MonetDB/MonetDB/issues/7131/comments | 2 | 2021-05-20T12:37:51Z | 2024-06-27T13:15:52Z | https://github.com/MonetDB/MonetDB/issues/7131 | 896,766,652 | 7,131 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
I used my fuzzing tool to test MonetDB, and found that it returns "TypeException:user.main[396]:'algebra.join' undefined" when processing a specific SQL query.
**To Reproduce**
1. monetdb destroy -f test_db
2. monetdb create test_db
3. monetdb release test_db
4. mclient -u monetdb -d test_db
5. sql>
`create table t_qh (
c_f INTEGER ,
c_y2 INTEGER ,
c_i768 INTEGER ,
c_tqx INTEGER ,
primary key(c_i768),
unique(c_y2)
);`
6. sql>
`create view t_amy as
select
ref_1.c_f as c0
from
t_qh as ref_0
cross join t_qh as ref_1
left outer join (select
ref_2.c_i768 as c0,
sum(
cast(ref_3.c_f as INTEGER)) as c1,
count(*) as c2
from
t_qh as ref_2
left outer join t_qh as ref_3
on (ref_2.c_f < ref_2.c_f)
where ref_2.c_i768 > ref_3.c_y2
group by ref_2.c_i768) as subq_0
on (ref_0.c_y2 < ref_1.c_f)
where ref_0.c_f <> (
select
ref_4.c_i768 as c0
from
t_qh as ref_4
cross join t_qh as ref_5
where EXISTS (
select
ref_6.c_i768 as c0,
ref_1.c_i768 as c1,
ref_6.c_y2 as c2
from
t_qh as ref_6
where subq_0.c0 < (
select
subq_0.c1 as c0
from
t_qh as ref_7
where (ref_5.c_f <> ref_6.c_f)
or (subq_0.c1 is NULL))));`
7. sql>
`select
(subq_0.c0 = case when EXISTS (
select
ref_15.c0 as c6
from
t_amy as ref_15)
then subq_0.c1 else subq_0.c1 end) as c4
from
(select
ref_12.c_f as c0,
30 as c1
from
t_qh as ref_12) as subq_0
;`
7. "TypeException:user.main[396]:'algebra.join' undefined in: algebra.join(X_597:bat[:bte], X_705:bat[:hge], nil:BAT, nil:BAT, true:bit, nil:lng);" occur!
**Expected behavior**
The SELECT should return its result normally.
**Software versions**
- MonetDB Oct2020_17
- Ubuntu 18.04
| Bug report: TypeException:user.main[396]:'algebra.join' undefined | https://api.github.com/repos/MonetDB/MonetDB/issues/7130/comments | 1 | 2021-05-17T13:59:37Z | 2024-06-27T13:15:51Z | https://github.com/MonetDB/MonetDB/issues/7130 | 893,363,326 | 7,130 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
I used my fuzzing tool to test MonetDB, and found that it returns "TypeException:user.main[19]:'batcalc.between' undefined" when processing a specific SQL query.
**To Reproduce**
1. monetdb destroy -f test_db
2. monetdb create test_db
3. monetdb release test_db
4. mclient -u monetdb -d test_db
5. sql>
`create table t_qh (
c_f INTEGER ,
c_y2 INTEGER ,
primary key(c_f),
unique(c_f)
);`
6. sql>
`WITH
cte_1 AS (select
count(
cast(87.53 as INTEGER)) as c0,
avg(
cast(abs(
cast(50.40 as INTEGER)) as INTEGER)) as c1,
subq_0.c0 as c2
from
(select distinct
ref_5.c_f as c0,
75 as c1,
ref_5.c_f as c2
from
t_qh as ref_5
where ref_5.c_f is not NULL) as subq_0
group by subq_0.c0)
select distinct
sum(
cast((case when (ref_23.c0 > ref_23.c0)
and (ref_23.c0 < ref_23.c1) then ref_23.c1 else ref_23.c1 end
& ref_23.c1) as INTEGER)) as c3,
ref_23.c0 as c4
from
cte_1 as ref_23
group by ref_23.c0;`
7. "TypeException:user.main[19]:'batcalc.between' undefined in: batcalc.between(X_188:bat[:lng], X_188:bat[:lng], X_190:bat[:dbl], false:bit, false:bit, false:bit, false:bit, false:bit);" occur!
**Expected behavior**
return select result normally.
**Software versions**
- MonetDB Oct2020_17
- Ubuntu 18.04
| Bug report: TypeException:user.main[19]:'batcalc.between' undefined | https://api.github.com/repos/MonetDB/MonetDB/issues/7129/comments | 1 | 2021-05-17T13:13:48Z | 2024-06-27T13:15:50Z | https://github.com/MonetDB/MonetDB/issues/7129 | 893,320,262 | 7,129 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
I used my fuzzing tool to test MonetDB, and found that it returns a strange error message, "Subquery result missing", when processing a specific SQL query.
**To Reproduce**
1. monetdb destroy -f test_db
2. monetdb create test_db
3. monetdb release test_db
4. mclient -u monetdb -d test_db
5. sql>
`create table t_qh (
c_f INTEGER ,
c_y2 INTEGER ,
c_i768 INTEGER ,
c_tqx TEXT ,
primary key(c_f, c_y2),
unique(c_y2)
);`
6. sql>
`select
ref_1.c_i768 as c0
from
t_qh as ref_1
cross join (select
ref_2.c_i768 as c0
from
t_qh as ref_2
inner join t_qh as ref_3
on (1=1)
where ref_3.c_f <> ref_3.c_y2) as subq_0
where ref_1.c_y2 < (
select
ref_1.c_f as c0
from
t_qh as ref_4
where (EXISTS (
select distinct
ref_5.c_i768 as c0
from
t_qh as ref_5)) and (ref_1.c_i768 between ref_4.c_y2 and ref_1.c_y2));`
7. "Subquery result missing" occur!
**Expected behavior**
The SELECT should return its result normally.
**Software versions**
- MonetDB Oct2020_17
- Ubuntu 18.04
| Bug report: strange error message "Subquery result missing" | https://api.github.com/repos/MonetDB/MonetDB/issues/7128/comments | 1 | 2021-05-17T12:56:23Z | 2024-06-27T13:15:49Z | https://github.com/MonetDB/MonetDB/issues/7128 | 893,303,890 | 7,128 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
I used my fuzzing tool to test MonetDB and found a crash in mclient.
**To Reproduce**
1. monetdb destroy -f test_db
2. monetdb create test_db
3. monetdb release test_db
4. mclient -u monetdb -d test_db
5. sql>
`create table t_qh (
c_f INTEGER ,
c_y2 INTEGER ,
c_i768 INTEGER ,
c_tqx TEXT ,
c_mknkhml TEXT ,
primary key(c_f, c_y2),
unique(c_y2)
);`
6. sql>
`create table t_ckfystsc (
c_kvhq5p INTEGER ,
c_aifpl INTEGER ,
c_jf6 TEXT ,
c_f31ix TEXT NOT NULL,
c_lk0zqfa INTEGER ,
c_qotzuxn INTEGER ,
c_w_z TEXT ,
primary key(c_lk0zqfa),
unique(c_kvhq5p)
);`
7. sql>
`create table t_irwrntam7 as
select
ref_0.c_i768 as c0,
ref_0.c_i768 as c1,
ref_0.c_f as c2,
ref_0.c_i768 as c3
from
t_qh as ref_0
where (ref_0.c_f in (
select
ref_1.c_lk0zqfa as c0
from
t_ckfystsc as ref_1))
and (ref_0.c_y2 >= (
select
ref_0.c_y2 as c0
from
t_qh as ref_7
union
select
ref_0.c_f as c0
from
t_qh as ref_9));`
8. sql> (press "Enter")
9. "write error on stream" occurs, and mclient crashes.
**Expected behavior**
The table t_irwrntam7 should be created successfully.
**Software versions**
- MonetDB Oct2020_17
- Ubuntu 18.04
| Bug report: "write error on stream" that results in mclient crash | https://api.github.com/repos/MonetDB/MonetDB/issues/7127/comments | 1 | 2021-05-17T12:39:48Z | 2024-06-27T13:15:48Z | https://github.com/MonetDB/MonetDB/issues/7127 | 893,290,323 | 7,127 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
The "lower" and "upper" functions doesn't work for Cyrillic alphabet
**To Reproduce**
sql>select lower('ASDFasdfФЫВАфыва');
+------------------+
| %2 |
+==================+
| asdfasdfФЫВАфыва |
+------------------+
1 tuple
sql>select upper('ASDFasdfФЫВАфыва');
+------------------+
| %2 |
+==================+
| ASDFASDFФЫВАфыва |
+------------------+
1 tuple
**Expected behavior**
sql>select lower('ASDFasdfФЫВАфыва');
+------------------+
| %2 |
+==================+
| asdfasdfфывафыва |
+------------------+
1 tuple
sql>select upper('ASDFasdfФЫВАфыва');
+------------------+
| %2 |
+==================+
| ASDFASDFФЫВАФЫВА |
+------------------+
1 tuple
**Screenshots**


**Software versions**
- MonetDB version number:
# MonetDB 5 server v11.39.17 (Oct2020-SP5)
# Serving database 'demo', using 8 threads
# Compiled for amd64-pc-windows-msvc/64bit
- OS and version: Windows 10 Pro
- Installed from release package from https://www.monetdb.org/downloads/Windows/Oct2020-SP5/
**Additional context**
Илья Хайбуллин wrote:
Good afternoon.
It is necessary to perform a normal search on a string column without taking into account the case. Like this: lower(text_column) like ('%lower_text%').
However, the result of running "lower" is not satisfactory for the Cyrillic alphabet.
select lower ('dsMKjhовЛДРВвы');
dsmkjhовЛДРВвы
That is, for the Cyrillic alphabet, lower does not work.
Are there any ways to solve this problem?
------------
Sjoerd Mullender <sjoerd@monetdb.org>, May 17 at 12:15
Does it work to use ILIKE instead of LIKE or = to do the comparison?
I.e. something like: select col ILIKE '%UPPER%' (or '%lower%')?
-------------
Илья Хайбуллин wrote:
ILIKE works case-insensitively - that's good, thank you very much.
But the question about the translation to lowercase is still relevant.
--------------
Sjoerd Mullender<sjoerd@monetdb.org>
I'm not denying that, but it's good to know that this is at least a work
around, and that the code implementing the ILIKE operator is good enough
so that it might be used as the basis for the lower function.
In any case, can you report this on our bug tracker, please.
https://bugs.monetdb.org (which should redirect you to github).
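As noted in the thread above, ILIKE can serve as a case-insensitive workaround until lower()/upper() handle Cyrillic; a minimal sketch (table and column names are hypothetical):

```
-- case-insensitive search without relying on lower()
select text_column
from some_table
where text_column ilike '%поиск%';
```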
| The "lower" and "upper" functions doesn't work for Cyrillic alphabet | https://api.github.com/repos/MonetDB/MonetDB/issues/7126/comments | 0 | 2021-05-17T12:25:58Z | 2024-06-27T13:15:48Z | https://github.com/MonetDB/MonetDB/issues/7126 | 893,279,070 | 7,126 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
- When using the round function inside a case statement, a division-by-zero error occurs.
**To Reproduce**
1. Environment and Version
1.1 MonetDB 5 Server 11.39.17 (Oct2020-SP5)
2. Data Creation
2.1 DDL
sql >
-- Create Table
create table round_test(
category String null,
colorby String null,
stddev_dep_delay double null,
usl double null,
lsl double null
);
sql >
-- Insert Sample Data
insert into round_test values ( '2017_6','NK',108.939,8258.0,-64.0 );
insert into round_test values ( '2017_6','VX',null,8258.0,-63.0 );
insert into round_test values ( '2017_6','EV',101.249,8258.0,-64.0 );
insert into round_test values ( '2017_6','UA',112.307,8258.0,-64.0 );
insert into round_test values ( '2017_6','F9',null,8409.0,-64.0 );
insert into round_test values ( '2017_6','B6',null,8258.0,-64.0 );
insert into round_test values ( '2017_6','OO',158.159,8258.0,-64.0 );
insert into round_test values ( '2017_6','WN',null,8258.0,-64.0 );
insert into round_test values ( '2017_6','AA',160.657,8258.0,-64.0 );
insert into round_test values ( '2017_6','DL',44.279,8258.0,-63.0 );
insert into round_test values ( '2017_6','AS',null,8258.0,-64.0 );
insert into round_test values ( '2017_6','HA',0.0,8258.0,-64.0 );
insert into round_test values ( '2017_6',null,null,null,null );
-- Make sure the data is inserted properly
sql > select * from round_test;
+----------+---------+--------------------------+--------------------------+--------------------------+
| category | colorby | stddev_dep_delay | usl | lsl |
+==========+=========+==========================+==========================+==========================+
| 2017_6 | NK | 108.939 | 8258 | -64 |
| 2017_6 | VX | null | 8258 | -63 |
| 2017_6 | EV | 101.249 | 8258 | -64 |
| 2017_6 | UA | 112.307 | 8258 | -64 |
| 2017_6 | F9 | null | 8409 | -64 |
| 2017_6 | B6 | null | 8258 | -64 |
| 2017_6 | OO | 158.159 | 8258 | -64 |
| 2017_6 | WN | null | 8258 | -64 |
| 2017_6 | AA | 160.657 | 8258 | -64 |
| 2017_6 | DL | 44.279 | 8258 | -63 |
| 2017_6 | AS | null | 8258 | -64 |
| 2017_6 | HA | 0 | 8258 | -64 |
| 2017_6 | null | null | null | null |
+----------+---------+--------------------------+--------------------------+--------------------------+
2.2 Data Query Task
sql > select a.*,
case when stddev_dep_delay is null or stddev_dep_delay=0
then -9999
else round(((usl - lsl) / (6 * stddev_dep_delay)), 3) end as pp_dep_delay
from round_test a;
-- Execution result
division by zero.
-- **The previous version (MonetDB 5 server v11.31.13) did not have this problem**
2.3 Other tests to validate the query
/*
SUCCESS - Not use Round function
*/
sql > select a.*,
case when stddev_dep_delay is null or stddev_dep_delay=0
then -9999
else
--round(
((usl - lsl) / (6 * stddev_dep_delay))
-- , 3)
end as pp_dep_delay
from round_test a;
+----------+---------+--------------------------+--------------------------+--------------------------+--------------------------+
| category | colorby | stddev_dep_delay | usl | lsl | pp_dep_delay |
+==========+=========+==========================+==========================+==========================+==========================+
| 2017_6 | NK | 108.939 | 8258 | -64 | 12.731895831612187 |
| 2017_6 | VX | null | 8258 | -63 | -9999 |
| 2017_6 | EV | 101.249 | 8258 | -64 | 13.698900729883754 |
| 2017_6 | UA | 112.307 | 8258 | -64 | 12.350076130606285 |
| 2017_6 | F9 | null | 8409 | -64 | -9999 |
| 2017_6 | B6 | null | 8258 | -64 | -9999 |
| 2017_6 | OO | 158.159 | 8258 | -64 | 8.769655852654607 |
| 2017_6 | WN | null | 8258 | -64 | -9999 |
| 2017_6 | AA | 160.657 | 8258 | -64 | 8.633299513871167 |
| 2017_6 | DL | 44.279 | 8258 | -63 | 31.32033996552165 |
| 2017_6 | AS | null | 8258 | -64 | -9999 |
| 2017_6 | HA | 0 | 8258 | -64 | -9999 |
| 2017_6 | null | null | null | null | -9999 |
+----------+---------+--------------------------+--------------------------+--------------------------+--------------------------+
/*
SUCCESS - using the round function outside the case statement
*/
sql > select a.*, round(
case when stddev_dep_delay is null or stddev_dep_delay=0
then -9999
else ((usl - lsl) / (6 * stddev_dep_delay)) end , 3) as pp_dep_delay
from round_test a;
+----------+---------+--------------------------+--------------------------+--------------------------+--------------------------+
| category | colorby | stddev_dep_delay | usl | lsl | pp_dep_delay |
+==========+=========+==========================+==========================+==========================+==========================+
| 2017_6 | NK | 108.939 | 8258 | -64 | 12.732 |
| 2017_6 | VX | null | 8258 | -63 | -9999 |
| 2017_6 | EV | 101.249 | 8258 | -64 | 13.699 |
| 2017_6 | UA | 112.307 | 8258 | -64 | 12.35 |
| 2017_6 | F9 | null | 8409 | -64 | -9999 |
| 2017_6 | B6 | null | 8258 | -64 | -9999 |
| 2017_6 | OO | 158.159 | 8258 | -64 | 8.77 |
| 2017_6 | WN | null | 8258 | -64 | -9999 |
| 2017_6 | AA | 160.657 | 8258 | -64 | 8.633 |
| 2017_6 | DL | 44.279 | 8258 | -63 | 31.32 |
| 2017_6 | AS | null | 8258 | -64 | -9999 |
| 2017_6 | HA | 0 | 8258 | -64 | -9999 |
| 2017_6 | null | null | null | null | -9999 |
+----------+---------+--------------------------+--------------------------+--------------------------+--------------------------+
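A possible workaround sketch until the eager evaluation inside CASE is fixed: turn a zero divisor into NULL with the standard NULLIF function, so the division yields NULL instead of raising an error, and map NULL back to -9999 with COALESCE (assuming NULLIF and COALESCE behave as in standard SQL):

```
select a.*,
       coalesce(round((usl - lsl) / nullif(6 * stddev_dep_delay, 0), 3),
                -9999) as pp_dep_delay
from round_test a;
```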
**Expected behavior**
-- **In the previous version (MonetDB 5 server v11.31.13)**
sql > select a.*,
case when stddev_dep_delay is null or stddev_dep_delay=0
then -9999
else round(((usl - lsl) / (6 * stddev_dep_delay)), 3) end as pp_dep_delay
from round_test a;
+----------+---------+--------------------------+--------------------------+--------------------------+--------------------------+
| category | colorby | stddev_dep_delay | usl | lsl | pp_dep_delay |
+==========+=========+==========================+==========================+==========================+==========================+
| 2017_6 | NK | 108.939 | 8258 | -64 | 12.732 |
| 2017_6 | VX | null | 8258 | -63 | -9999 |
| 2017_6 | EV | 101.249 | 8258 | -64 | 13.699 |
| 2017_6 | UA | 112.307 | 8258 | -64 | 12.35 |
| 2017_6 | F9 | null | 8409 | -64 | -9999 |
| 2017_6 | B6 | null | 8258 | -64 | -9999 |
| 2017_6 | OO | 158.159 | 8258 | -64 | 8.77 |
| 2017_6 | WN | null | 8258 | -64 | -9999 |
| 2017_6 | AA | 160.657 | 8258 | -64 | 8.633 |
| 2017_6 | DL | 44.279 | 8258 | -63 | 31.32 |
| 2017_6 | AS | null | 8258 | -64 | -9999 |
| 2017_6 | HA | 0 | 8258 | -64 | -9999 |
| 2017_6 | null | null | null | null | -9999 |
+----------+---------+--------------------------+--------------------------+--------------------------+--------------------------+
**Software versions**
- MonetDB version number: MonetDB 5 Server 11.39.17 (Oct2020-SP5)
- OS and version: CentOS 7.4
- Installed from release package | MonetDB Round Function issues in the latest release | https://api.github.com/repos/MonetDB/MonetDB/issues/7125/comments | 2 | 2021-05-11T06:52:28Z | 2024-06-27T13:15:47Z | https://github.com/MonetDB/MonetDB/issues/7125 | 886,247,150 | 7,125 |
[
"MonetDB",
"MonetDB"
This continues the scenario in #7123.
Suppose you have a long-running query (SELECT only, no changes to the data), which you stop from another connection with `call sys.stop(the_tag_id)`. You wait a few minutes, then terminate the (first) connection with Ctrl+C. You now issue `monetdb stop the_db_name`.
First, the DB takes much too long to stop. Since the query did not change any data, there's no RAM-resident data which needs to be written to disk, or any other significant housekeeping to do. Yet it takes over 30 seconds on my system (Intel i7600k, 16GB memory, not otherwise heavily loaded).
Second, the final result is a crash. That is, the `monetdb stop` command prints `done`, but then I get:
```
name state health remarks
dbname C 1d 9% 10s crashed (started...1-05-08 23:12:42)
```
There shouldn't have been a crash.
**Software versions**
- MonetDB 11.39.11
- Devuan GNU/Linux Beowulf
- Built from source
| DB stop problems with a long-running query | https://api.github.com/repos/MonetDB/MonetDB/issues/7124/comments | 5 | 2021-05-10T09:27:38Z | 2021-10-26T13:47:19Z | https://github.com/MonetDB/MonetDB/issues/7124 | 883,817,762 | 7,124 |
[
"MonetDB",
"MonetDB"
] | When you start a long-running statement on one connection and then, in a second connection, get its tag and issue `call sys.stop(the_tag_of_the_other_statement)`, the statement disappears from the `sys.queue` table, but the first connection behaves as though it's still running the statement: it doesn't print a prompt, doesn't echo entered characters, and won't execute additional statements.
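A minimal sketch of the flow on the second connection (1234 is a placeholder for the real tag, and the exact column set of `sys.queue` may vary by version):

```
-- second connection: find the tag of the long-running statement
SELECT tag, query FROM sys.queue;
-- second connection: stop it
CALL sys.stop(1234);
```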
**Expected behavior**
After the stop has been issued, the first connection should print a message about this having happened, and display a prompt for taking the next statement.
**Software versions**
- MonetDB 11.39.11
- Devuan GNU/Linux Beowulf
- Built from source
Note: This happened to me with a SELECT-only query, not a query which changes any data. | Stopping a statement keeps its session locked up | https://api.github.com/repos/MonetDB/MonetDB/issues/7123/comments | 1 | 2021-05-10T08:54:33Z | 2024-06-27T13:15:46Z | https://github.com/MonetDB/MonetDB/issues/7123 | 883,761,454 | 7,123 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
In some cases of "expensive" SELECT-only queries, possibly with queries being made over more than one connection, mserver5 does not stop executing said queries for at least 1 minute (possibly a lot more - I killed the processes) despite the connection which made them having been terminated (Ctrl+C). The number of workers is listed as 0, but the `htop` utility shows a whole lot of CPU use (when there are no other outstanding queries).
I should mention that while this was happening, there was a whole lot of thrashing between memory and disk; it might have had something to do with this.
**Expected behavior**
Once an mclient connection is terminated, execution of any SELECT-only queries issued on that connection should cease within a very brief period (less than a second). The same should probably be true for any statement, I suppose.
**Software versions**
- MonetDB 11.39.11
- Devuan GNU/Linux Beowulf
- Built from sources
| Query execution does not stop despite disconnection | https://api.github.com/repos/MonetDB/MonetDB/issues/7122/comments | 2 | 2021-05-08T20:01:39Z | 2024-06-27T13:15:45Z | https://github.com/MonetDB/MonetDB/issues/7122 | 881,247,095 | 7,122 |
[
"MonetDB",
"MonetDB"
] | I'm using MonetDB with Pentaho to analyze a very large fact table (around 200 million rows and more than 50 columns). The MonetDB Jun2020 release implemented grouping sets, and I think this feature would noticeably improve aggregate performance.
Today, using grouping sets in Pentaho Mondrian is a feature limited to a couple of databases (Oracle, DB2 and Greenplum), and it'd be great to have MonetDB in such a select group.
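For illustration, this is the kind of single-scan aggregate query that grouping-sets support would let Mondrian emit (a sketch; table and column names are hypothetical):

```
SELECT year_id, carrier_id, SUM(sales_amount) AS total_sales
FROM fact_sales
GROUP BY GROUPING SETS ((year_id), (carrier_id), (year_id, carrier_id), ());
```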
| Enable grouping sets in Mondrian for MonetDB | https://api.github.com/repos/MonetDB/MonetDB/issues/7121/comments | 4 | 2021-05-07T21:46:47Z | 2024-06-07T12:26:09Z | https://github.com/MonetDB/MonetDB/issues/7121 | 879,801,184 | 7,121 |
[
"MonetDB",
"MonetDB"
] | If I search monetdb.org for, say, ["functions"](https://www.monetdb.org/index.php/search/node?keys=functions), all the results I get have links to localhost:8080/whatever. I don't remember search being busted like this when I last used it... | MonetDB website search shows result links to localhost:8080 | https://api.github.com/repos/MonetDB/MonetDB/issues/7120/comments | 1 | 2021-05-07T17:15:50Z | 2024-06-27T13:15:44Z | https://github.com/MonetDB/MonetDB/issues/7120 | 879,384,188 | 7,120 |
[
"MonetDB",
"MonetDB"
] | MonetDB offers (?) the `sys.pause()` and `sys.stop()` procedures which apply to running statements. These procedures take a statement "tag id"; but when you issue a statement either interactively or with `mclient -s`, you don't get told what the new "tag id" is. While you can query it using the `sys.queue` table, it would be convenient if you could just be told what the query tag is - if a message could be printed on execution start, e.g.
```
Executing statement 1234.
```
Obviously, this should not be printed by default; users don't expect to get this, and certainly in non-interactive mode you might want just the raw results. But there should be an option to enable printing it, IMHO.
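In the meantime, the tag has to be fetched manually from the `sys.queue` table on another connection; a sketch (the exact column set of `sys.queue` may vary by version):

```
-- look up the tag of the running statement from a second connection
SELECT tag, query FROM sys.queue;
```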
**Software versions**
- MonetDB 11.39.11
- Devuan GNU/Linux Beowulf
- Built from source
| Add option to have the tag ID printed when execution starts | https://api.github.com/repos/MonetDB/MonetDB/issues/7119/comments | 3 | 2021-05-07T17:13:02Z | 2022-07-26T09:28:55Z | https://github.com/MonetDB/MonetDB/issues/7119 | 879,379,985 | 7,119 |
[
"MonetDB",
"MonetDB"
] | Suppose I have two large tables, the product of whose sizes is beyond the heap allocation capacity. If I write:
```
SELECT COUNT(*)
FROM t1, t2
WHERE false;
```
I get the error:
```
GDK reported error: GDKextendf: could not extend file: File too large
HEAPalloc: Insufficient space for HEAP of 40577078061056 bytes.
```
Clearly, no large allocation is needed here. This situation can probably be generalized somewhat, but I'm not sure exactly how.
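In principle, a constant-false predicate could let the optimizer reduce the whole plan to an empty result without ever allocating the join heap; a sketch of the hypothetical rewrite (not current MonetDB behavior):

```
-- hypothetical constant-folded equivalent of the query above:
-- the cross join is empty, so COUNT(*) is simply 0
SELECT CAST(0 AS BIGINT) AS count_result;
```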
**Software versions**
- MonetDB 11.39.11
- Devuan GNU/Linux Beowulf
- Built from source
| Unnecessary heap allocation fails simple query | https://api.github.com/repos/MonetDB/MonetDB/issues/7118/comments | 3 | 2021-05-07T10:33:33Z | 2024-06-28T10:07:22Z | https://github.com/MonetDB/MonetDB/issues/7118 | 878,745,904 | 7,118 |
[
"MonetDB",
"MonetDB"
] | I've just run a query taking a few minutes of a DBMS, and which creates a new table t2 from data in table t1.
During the run of this query, I ran another query, which is merely a SELECT, and which only regards table t1. This query takes a lot less time and concludes successfully.
After a while, I come to check on the long query, only to find the message:
```
> operation successful
> COMMIT: transaction is aborted because of concurrency conflicts, will ROLLBACK instead
```
but - there were no concurrency conflicts; and I mean that in two ways:
* The only changes are in a new table.
* Even if the changes had been to the old table, it wouldn't have been a problem to commit them, since by the time the question of whether to commit came up, no other query was running.
Unfortunately I don't have easy reproduction instructions.
**Expected behavior**
The new table should have been created.
**Software versions**
- MonetDB 11.39.11
- Devuan GNU/Linux Beowulf
- Built from source.
| false concurrency concern prevents commit | https://api.github.com/repos/MonetDB/MonetDB/issues/7117/comments | 5 | 2021-05-05T20:30:42Z | 2024-06-07T12:28:14Z | https://github.com/MonetDB/MonetDB/issues/7117 | 876,809,903 | 7,117 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
When I try to create a custom filter function, I get:
```
sql>CREATE OR REPLACE FILTER FUNCTION maxlevenshtein(s1 string, s2 string, k tinyint) EXTERNAL NAME spinque."maxlevenshtein";
CREATE FILTER FUNCTION: there's a function with the name 'maxlevenshtein' and the same parameters, which causes ambiguous calls
```
However, no such function was created before:
```
sql>select * from sys.functions where name = 'maxlevenshtein';
+----+------+------+-----+----------+------+-------------+--------+--------+-----------+--------+-----------+
| id | name | func | mod | language | type | side_effect | varres | vararg | schema_id | system | semantics |
+====+======+======+=====+==========+======+=============+========+========+===========+========+===========+
+----+------+------+-----+----------+------+-------------+--------+--------+-----------+--------+-----------+
0 tuples
```
No issue with normal functions.
**Software versions**
- MonetDB 5 server 11.41.0 (hg id: f10cd9f0c1)
- OS and version: Fedora 33
- compiled from sources
| Jul2021: Cannot create filter functions | https://api.github.com/repos/MonetDB/MonetDB/issues/7116/comments | 7 | 2021-05-03T16:05:03Z | 2024-06-27T13:15:41Z | https://github.com/MonetDB/MonetDB/issues/7116 | 874,703,878 | 7,116 |
[
"MonetDB",
"MonetDB"
] | **Describe the bug**
When trying to upgrade an Oct2020 database, I get the following errors (from `mdbtrace.log`):
```
2021-05-03 17:24:56 M_ERROR MAL_SERVER main thread monetdb5/mal/mal_exception.c:103 createExceptionInternal ParseException:SQLparser:42000!DROP PROCEDURE: no such procedure 'deltas' (clob)
2021-05-03 17:24:56 M_CRITICAL SQL_PARSER main thread sql/backends/monet5/sql_upgrades.c:3588 SQLupgrades ParseException:SQLparser:42000!DROP PROCEDURE: no such procedure 'deltas' (clob)
```
**Software versions**
- MonetDB 5 server 11.41.0 (hg id: f10cd9f0c1)
- OS and version: Fedora 33
- compiled from sources
| Jul2021: ParseException while upgrading Oct2020 database | https://api.github.com/repos/MonetDB/MonetDB/issues/7115/comments | 1 | 2021-05-03T15:36:44Z | 2024-06-27T13:15:41Z | https://github.com/MonetDB/MonetDB/issues/7115 | 874,682,071 | 7,115 |